
The Breakdown

Section 230 Just Survived a Brush with Death

The U.S. Supreme Court opted not to throw the internet as we know it into utter chaos—for now

Illustration of the Google logo and Twitter logo standing in front of the U.S. Supreme Court building, looking quite scared.
Credit: Gabriel Hongsdusit

For those of you worried about the Supreme Court breaking the internet, you can breathe easy. The court left Section 230 of the Communications Decency Act unscathed—for now—in opinions released today on two closely watched cases that, had they gone differently, many observers worried could shake the foundations of online discourse.

In a pithy three-page opinion, the court vacated the lower court's ruling in Gonzalez v. Google and sent the case back. The case explored whether Google was liable for acts of terrorism perpetrated by the Islamic State group in Paris in 2015 because the group used YouTube to spread violent messages. A related case, Twitter v. Taamneh, examined whether online platforms could be held responsible for the effects of violent content posted by the terrorist group. In a lengthier opinion authored by Justice Clarence Thomas, the court unanimously found that the platforms are not liable under the Antiterrorism Act. Section 230, one of the more important legal provisions of the modern internet, has escaped intact. However, there are a number of interesting wrinkles to consider here, including some hints at where the next challenge to Section 230 may arise.

First, a quick recap: Back in February, the court entertained oral arguments in both cases, taking a close look at liability for choices made by internet platforms. As Kate Klonick, law professor at St. John’s University and fellow at the Berkman Klein Center at Harvard, explained brilliantly here, the narrative arc of Taamneh and Gonzalez should be understood in political context—specifically, partisan calls for social media regulation.

Nevertheless, the question at the heart of Gonzalez—whether Section 230 protects platforms when their algorithms target users and recommend someone else’s content—prompted a flurry of concern and an avalanche of amicus briefs discussing why this would break the internet as we know it. (In a Q&A for us, James Grimmelmann, professor at Cornell Law School and Cornell Tech, explained how disruptive this would be for generative AI, too.) Today, the court punted on the case, saying in its opinion that it would be resolved by the court’s logic in Taamneh.

Taamneh looked at whether Twitter’s failure to remove certain Islamic State content constituted “aiding and abetting” a terrorist attack against a nightclub in Istanbul. The court rejected Taamneh’s claim, explaining that aiding and abetting constitutes “conscious, voluntary, and culpable participation in another’s wrongdoing.” In lay terms, it has to be specific and intentional. Justice Thomas, writing for the court, reasoned by analogy to earlier technologies: “To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally.” 

Fascinatingly, this opinion was authored by Thomas, the very same justice who’s been clamoring for increased regulation of social media. Grimmelmann weighs in:

Credit: Nabiha Syed
Screenshot of a Signal conversation with James Grimmelmann, which reads in part: "My money was not on Clarence Thomas, the single biggest critic of technology platforms on the court, to write an opinion blessing the use of recommendation algorithms. … This is an opinion that takes seriously the ways in which the Internet is not like the offline world, but without slipping into hyperbole or romanticizing it. It's a clear, clean opinion that firmly repudiates the entire theory of liability the plaintiffs in these terrorist-hosting cases have tried to create."

Even more interesting? Justice Ketanji Brown Jackson filed a brief concurrence in Taamneh, noting that “Other cases presenting different allegations and different records may lead to different conclusions,” and in deciding today’s cases, the court “draws on general principles of tort and criminal law to inform its understanding… General principles are not, however, universal.” 

I sent Kate a Signal message to read the tea leaves from Justice Jackson, and what that might mean for the future of Section 230:

Credit: Nabiha Syed
Screenshot of a Signal conversation with Kate Klonick, which reads in part: "I think KBJ's concurrence is about her interest in Section 230 (c)(2) reform. So, in a reading of Section 230(c)(2) the plain text *could* be interpreted to make the platform immunity from liability conditional on the platforms acting as Good Samaritans. If you accepted that reading, it would limit the broad scope of 230 and ostensibly allow platforms to be sued if they fail to act as good samaritans to platform users. But lower courts *haven't* read that meaning into the text. … In oral argument, KBJ hammered on the value of this idea… I think her concurrence reflects her openness to coming back to that argument."

So while tech industry lawyers might sleep a bit easier tonight, there’s still more to come. Stay tuned for more internet law intrigue in the coming months as we wait for the Solicitor General’s perspective in the NetChoice cases challenging social media laws in Florida and Texas. Briefing in those cases is likely to come in the fall, and The Markup will be here to help you make sense of it.
