Hello, friends,
You might be excused for thinking that the biggest news in tech this past week was Elon Musk’s bid to buy Twitter. But, in fact, the most consequential moment in tech policy last week was when European lawmakers reached a landmark agreement on the most ambitious plan in the world to provide meaningful regulation and oversight of Big Tech.
The details and text of the Digital Services Act (DSA) have not yet been released, but the framework that has been publicized describes a sweeping new set of standards for large tech companies. The law is aimed squarely at requiring tech companies to actively police their platforms for illegal content, such as hate speech and terrorist content as defined by the European Union.
Some highlights of the law’s requirements are:
- Swift removal of illegal online content, whether it is hate speech or an illegal product.
- Algorithmic audits. Big Tech platforms will be required to do annual audits of how their algorithms affect democracy, human rights, and the physical and mental health of minors and other users.
- Limits on microtargeted advertising. The EU initially sought to ban all behavioral ad targeting but eventually settled for banning ads targeted at minors based on profiling, as well as targeting based on sensitive data such as sexual orientation, religion, and ethnicity.
- A ban on “dark patterns”—design choices that mislead or steer people into decisions they may not otherwise have made.
- “Know Your Business Customer” rules that require online marketplaces to have more accountability for the people they allow to sell on their platforms.
- User choice of algorithms. Platforms will be required to give users access to at least one algorithm that is not based on a behavioral profile. (Think of Twitter’s chronological timeline.)
- Dispute mechanisms. Users will have the right to appeal decisions made about the removal of their content.
- Huge fines. Platforms that do not comply could face fines of up to 6 percent of revenues, which could amount to billions of dollars in some cases.
In other words, just in the nick of time, the whims of a billionaire who takes over an online platform will not be the final word in how speech is governed on that platform. At least in Europe, the laws will require some sort of policing of content whether Elon Musk likes it or not.
To understand the sweeping new directive, I spoke this week with Joris van Hoboken, who leads the Digital Services Act Observatory project, which has followed the development of the legislation and provides a venue for experts to discuss the proposals. Van Hoboken is an associate professor of law in the Brussels School of Governance LLM program and a senior researcher at the Institute for Information Law (IViR), University of Amsterdam.
Our conversation, edited for brevity and clarity, is below.
Angwin: I wanted to start with the slogan that politicians have been using to describe the DSA, which is “what is illegal offline must also be illegal online.” Is that a fair description of the law?
Van Hoboken: I think it’s a very lame slogan. There are competing narratives for what they’ve done. But what is very clear is this is a European version of the NetzDG law, which took full effect in Germany in 2018. That law was about making sure that legal provisions that prohibit certain types of speech were actually enforced by internet platforms, and social media in particular.
As more countries started adopting these types of laws, it started to cause a fragmentation of the European digital single market. And that’s what Europe doesn’t like at all. So, the DSA is really about making sure this is a European law matter.
Angwin: The EU has created other policies that outline content moderation obligations for big tech companies, such as the 2016 Code of Conduct on Countering Illegal Hate Speech Online. How does the DSA compare to other actions?
Van Hoboken: Over the past few years, we have seen people lose faith in self-regulatory instruments like the 2016 code of conduct. After 20 years of working with a limited liability framework, the DSA signals a shift from the self-regulatory paradigm to an environment where the European Commission is saying, “We are going to regulate you and we are going to require you to follow a set of standards, conditions, and safeguards.”
The DSA is not a self-regulatory approach to illegal content but instead makes more explicit requirements of companies. These requirements are not specific to particular pieces of illegal content but are more systemic. For example, companies have to be transparent about their terms of service. They have to offer transparency reports about how much illegal content they’re removing. They have to have a notice-and-takedown procedure for illegal content that fulfills specified criteria, and a mechanism for users to complain about removals, along with other measures. For very large platforms, risk assessment and mitigation reports are also required.
Angwin: In the past, the European Union has been criticized for lax enforcement of regulations, such as its landmark data privacy regulation, the GDPR. How will these new obligations be enforced?
Van Hoboken: Concerns from our experience with GDPR definitely influenced how the DSA will be enforced. In Europe, we have the country of origin principle for internet-based services, which states that if a tech company is established in one country, it has to follow the laws of the country where it’s established, and it doesn’t have to deal with 26 other regulators. From a business establishment point of view, this is very important. However, as platforms started to have a greater impact on every country in Europe, there was a bit of pushback, where countries did not want to be dependent on enforcement in a completely different country.
The DSA represents a compromise. For large, dominant platforms, the European Commission has the role of investigating and enforcing the DSA. This centralized enforcement is quite an interesting outcome of the whole process. Additionally, each member state will have an independent regulator, called a digital service coordinator. The specifics of these regulators have yet to be fully developed by the commission and the European countries.
Angwin: What might this new type of centralized enforcement look like and does this raise concerns about censorship?
Van Hoboken: Let’s use YouTube as an example. YouTube will have to conduct an assessment of a number of specifically identified systemic risks, including the risk of spreading illegal content, the risk of negative effects on the exercise of particular fundamental rights, and the risk of manipulation. Then they have to say, “We are adopting these mitigating measures to deal with these risks.” The commission would then look at whether the platform is doing a good job documenting, assessing, and mitigating these risks according to the legal standard. It’s going to resemble a structured regulatory dialogue, with the DSA giving the commission a more formal framework to refer to.
As for censorship, that is indeed a concern. Something I will be watching is how exactly the European Commission will oversee and monitor companies. Initially, the big platforms are pretty much in the driver’s seat to produce a bunch of documentation, but you can imagine some of it may not be convincing. You can also imagine that some particular risks may not be identified. I think one of the benefits of the DSA is it does provide a stronger legal anchoring, but there’s definitely also the possibility for overreach.
Angwin: The DSA has the first ban on microtargeting for sensitive categories, right?
Van Hoboken: Yes, and that was one of the big battles. Tech companies argued that online advertisements finance the internet. Facebook actually bought big newspaper ads to say how they’re helping small businesses reach their customers. Despite this, there was quite a big push to basically get rid of behavioral advertising overall and to move away from that business model. I haven’t seen the final text of this, but the outcome seems to be that the DSA is very similar to what is already in the GDPR. The GDPR doesn’t have the specific rules on behavioral advertising that the DSA does, but it has specific rules about what you can do with sensitive data, and it also has rules about processing the personal data of minors. It’s a lot of work to interpret and enforce those provisions, so the DSA may be helpful in that it makes some of those implied prohibitions explicit. However, it doesn’t really go much further than that, which I think for some is a disappointment.
Angwin: The DSA gives users more rights, including the right to appeal content removal decisions and access algorithms that aren’t based on behavioral data. What is your understanding of these greater user control mechanisms?
Van Hoboken: The DSA contains a general rule that a company’s terms of service have to be enforced in a proportionate, diligent, and objective manner. This includes the enforcement of a specific company’s policies, which often go much further than just illegal content, since there are a lot of categories that are legal but harmful. Related to this are the complaint procedures in the DSA. These allow users to complain to the platform if they feel their content was wrongfully removed, or to the digital service coordinator if they believe there was a violation of the DSA more generally.
It’s not clear to me that these complaint procedures will always be used in the right way. I could see them being weaponized by right-wing populist communities quite easily, because complaining is easy, and complaints can be used for political reasons to draw attention to a cause.
Angwin: It seems like (Facebook whistleblower) Frances Haugen’s testimony and recommendations were very influential. Do you agree?
Van Hoboken: Yes, I think so. The type of risk management framework included in the DSA is aligned with the traditional business logic that Haugen has also advocated for. We see this type of risk-based approach—where a lot of emphasis is placed on risk management—included in a bunch of regulatory areas, like the AI Act. Taking this into account, I think Frances Haugen spoke to the logic that had already been put forward by the regulators.
I think she actually would have liked to see these proposals be much more advanced. One of the things that Haugen said in Parliament is that yearly risk reports don’t make sense, and that you have to have continuous access to how these systems are operating. I think Haugen had a more radical vision of the model than the one that has been put forward. Perhaps this is where we will be in five years, since this is really a first attempt by European lawmakers, and the DSA is a new institutional setup. I think there’s an understanding that there will need to be some improvements and iteration on what has been put forward.
One of the things that drove the EU lawmakers to move quickly is that they wanted to set an international standard like they did with the GDPR. That being said, we have to look carefully at what kind of standard we are setting. Internationally, including in the Global South, certain elements could be picked up from this regulation, so it is important to look at what type of impact this has internationally and how businesses go about applying these rules.
As always, thanks for reading.
Best,
Julia Angwin
The Markup
Additional Hello World research by Eve Zelickson