Hello, friends,
The wail of ambulance sirens—a constant just weeks ago—has been largely replaced here in New York City with the sound of police helicopters hovering over predominantly peaceful protests.
One reason we have reached this moment of reckoning for police brutality and systemic racism is online social networks. For all of their flaws, they have also allowed more than just white male establishment voices to be amplified. Movements like #BlackLivesMatter and #MeToo began on social media. But those same platforms also allow our president to spread misinformation and conspiracy theories and to threaten violence against people exercising their First Amendment rights.
This week in our Ask The Markup, reporter Sara Harrison examines the U.S. law that enables both the good and the bad of internet speech—Section 230 of the Communications Decency Act—and how it might be reformed. The law, sometimes heralded as “the ‘Magna Carta’ of the internet,” was passed in 1996 to give websites an incentive to delete pornography, but it has since evolved. It is now effectively a shield that websites use to protect themselves from responsibility for all sorts of activity on their platforms, from illegal gun sales to discriminatory ads.
To understand the law—and how it could be improved—I interviewed Danielle Citron, the leading legal scholar in the emerging field of cyber civil rights. Citron has worked for nearly two decades to find legal and social strategies to combat the cyber harassment and invasions of sexual privacy that women, sexual minorities, and people of color disproportionately experience online.
Citron’s work is a major reason there are laws banning “revenge porn” in 46 states, the District of Columbia, and Guam. And she played a big role in persuading Facebook to set up a program that allows women to preemptively stop nonconsensual sexual images of themselves from circulating on its platform. Citron is a professor of law at Boston University, a MacArthur Fellow, and author of the book Hate Crimes in Cyberspace.
Citron has a distinct point of view that does not represent the views of The Markup. The interview is below, lightly edited for brevity.
Angwin: You have written that Section 230 needs updating. How has it fallen behind the times?
Citron: The law made sense in the world of bulletin boards. It was the first time that people could reach others they didn’t know and who lived far away. Some took the opportunity to say crazy defamatory things, post porn, and the like. The sites wanted to monitor destructive content but didn’t want to be liable for defamation, as they might be if their moderation was treated like the editing of a publisher. So the question for lawmakers was how to incentivize these platforms to filter dirty words, porn, and abuse.
But this statute was passed at a time when we didn’t have revenge porn sites. We didn’t have sites whose raison d’être was criminality. We didn’t have the business model of data collecting, tracking, and advertising. We didn’t have sites trafficking in illegal gun sales and nonconsensual porn.
There are sites whose business model is abuse, and those sites enjoy that immunity. If the people engaging in illegality are not the website operators, then the website gets to make all the money without any liability.
Section 230 puts women and minorities, who are disproportionately targeted by all kinds of online mischief, at a great disadvantage, and it allows sites to make money off their suffering.
The law needs updating. It needs reform. It no longer incentivizes responsible practices by platforms.
Angwin: You have proposed that Section 230 could be updated so that immunity is only available to websites that take “reasonable steps to prevent or address unlawful third-party content that it knows about.” How would that work?
Citron: First of all, the reason I like “reasonableness” is that courts handle it well. Reasonableness is a legal standard that courts employ in tort law, statutory law, and criminal procedure. What would count as reasonable for social media platforms is having content-moderation policies, having channels of accountability, and having fairly responsive reporting practices.
Twitter’s policy, for instance, bans threats, but when it comes to public officials, it assesses whether keeping a tweet up is in the public interest. A reasonableness standard wouldn’t look at whether Twitter took down a particular tweet. It would look at whether Twitter’s policies vis-à-vis threats and public officials are reasonable.
Reasonableness allows room for evolution. I always think of nonconsensual porn images as an example. Victims wanted the ability to contact Facebook and say, “I’m really worried that my partner will post something.” Facebook responded [with a program that allows women to send in photos they are worried about and have them proactively blocked from the platform]. Facebook got grief for that, but it is a great example of how what is reasonable today can change over time.
But it’s worth noting that I hate the idea that Facebook is not going to address falsehoods in political ads and it is allowing the microtargeting of political ads without any transparency or accountability. I told them that if they keep on with that policy, they are endangering democracy.
Failing to provide transparency and accountability around ads that cause clear harm (whether misdirecting voters on election day, inciting violence, or spreading harmful defamation) is unreasonable. Allowing illegality that causes clear harm to fester and thrive without any channels of accountability is unreasonable.
Angwin: What do you think of President Trump’s executive order asking the Federal Trade Commission and the Federal Communications Commission to examine whether social media platforms are complying with Section 230 and their own terms of service?
Citron: The order is not really an order in the sense of applicable law. It tells us the executive branch’s flawed interpretation of Section 230, which I imagine they are hoping courts will take seriously. Courts can and will ignore its advice.
It is only an order to the extent that it instructs the FTC and the FCC to bring cases against platforms that are acting deceptively because they are betraying their terms of service. That’s a waste of their time. I think it’s just a dog of an argument. They may bring these lawsuits and courts will strike them down on 230 grounds as they have in other contexts.
And if the president were to take away that immunity on deceptive-practices grounds, companies would refuse to do any moderation at all. They would abandon having terms of service. So companies just wouldn’t moderate. We would be overrun with spam, hate speech, nude photos, and rape and death threats, and the platforms couldn’t do anything but wait until lawsuits were brought.
If we are going to fix 230, let’s not squander the opportunity with terrible ideas.
Thanks as always for reading, and stay safe.
Best,
Julia Angwin
Editor-in-Chief
The Markup