
Hello World

The Alarming Rise of Dangerous Speech

A conversation with Susan Benesch

Illustration: two silhouetted profiles facing each other, speech bubbles layered in the background, and an open envelope with a pixelated cursor at the center. (Gabriel Hongsdusit)

Hello, friends, 

This week, new details emerged about how social media companies allowed violent rhetoric to circulate freely on their platforms in the weeks leading up to the Jan. 6 insurrection.

The news of how tech companies violated their own rules and ignored internal warnings was contained in a 122-page draft report prepared, but never released, by the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol. The Washington Post published the draft this past week, writing that its release was quashed in part because lawmakers feared offending Republicans and tech companies.

“Major platforms’ lax enforcement against violent rhetoric, hate speech, and the big lie stemmed from longstanding fear of scrutiny from elected officials and government regulators,” the report stated. “An evaluation of the platforms’ shortcomings in responding to these threats is an essential part of examining the ongoing challenges posed by violent far-right extremism and its attempts to crush American democracy.”

Most social media platforms have rules against inciting violence, but the report indicates that those rules were not always enforced and that there was confusion about what kinds of hate speech actually incite violence.

But there is, in fact, evidence about what kinds of speech incite violence. Research shows it is fear, more than hate, that often leads to mass violence. Leaders who seek to incite violence often fabricate threats so that people will feel they must defend themselves. Hate can of course be part of the equation, but fear is almost always a key ingredient.

Susan Benesch has spent years cataloging the types of speech that have led to genocide, and she has found a consistent pattern where fear of a looming threat is used to prod groups into preemptive violence. In a recent article, she writes that dangerous speech is on the rise in the United States and needs to be countered.

Benesch is the founder and executive director of the Dangerous Speech Project, which studies speech that can inspire violence and works to find ways to prevent that violence without infringing on freedom of expression. Benesch is also a faculty associate of the Berkman Klein Center for Internet and Society at Harvard University.

Our conversation, edited for brevity and clarity, is below.

Headshot: Susan Benesch

Angwin: Let’s start at the beginning. What is dangerous speech?

Benesch: It is any kind of human communication that makes people more likely to condone or even commit violence against another group of people. It can be speech, a photo, even the color of a T-shirt that convinces people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary, or even virtuous. 

I coined this term after looking at the rhetoric that malevolent civilian political leaders have used in the months and years leading up to mass intergroup violence. I was stunned at how similar this rhetoric is from case to case. It’s as if there’s some horrible school that they all attend. It made me think, If this stuff is a consistent precursor to intergroup violence, it could, at a minimum, be a useful early-warning signal for that violence. 

Angwin: There’s also a lot of discussion about hate speech. Can you draw out the distinction between dangerous speech and hate speech?

Benesch: If you attack and denigrate another group of people because of their membership in that group, that’s hate speech. But the boundaries are contested, and the definitions vary a lot. 

The definition of dangerous speech, by contrast, is deliberately not subjective, because it is consequentialist: it’s a prediction about the effect of speech as it is disseminated in the world, about the speech’s capacity to motivate somebody to commit or condone mass violence, a consequence the vast majority of people don’t want. That makes it easier to get people to agree that dangerous speech is bad than to agree that hate speech is bad, since it’s hard to agree on what counts as hate speech in the first place.

You can’t make a list of words that count as dangerous speech. You can only classify it with reference to our analytical framework, which asks, Who made or disseminated this content? Who received the message? What’s the content of the message itself that might make it convincing? In what social and historical context was the message disseminated? And finally, What was the means of dissemination? Those five factors can help us assess whether a message is more or less dangerous. As far as I know, there isn’t a similar framework for hate speech. 
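To make the shape of that framework concrete, here is a minimal sketch of the five factors as a single structured record, in Python. The class and field names are hypothetical illustrations of the framework’s questions, not a schema the Dangerous Speech Project publishes:

```python
from dataclasses import dataclass

@dataclass
class DangerousSpeechAssessment:
    """One record per message, mirroring the five-factor framework."""
    # All names here are illustrative; the framework is a set of analytic
    # questions for human analysts, not a published data schema.
    speaker: str   # Who made or disseminated the content?
    audience: str  # Who received the message?
    message: str   # What in the content itself might make it convincing?
    context: str   # The social and historical context of dissemination
    medium: str    # The means of dissemination (radio, platform, etc.)

# A hypothetical, filled-in example of one assessment:
example = DangerousSpeechAssessment(
    speaker="influential political figure",
    audience="followers primed to feel threatened",
    message="portrays the other group as a looming, lethal threat",
    context="period of heightened intergroup tension",
    medium="large social media platform",
)
```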

Angwin: Can you talk about some of the common characteristics of dangerous speech?

Benesch: I mentioned earlier that I originally became interested in this topic because of how similar the rhetoric is from case to case. I noticed certain rhetorical moves in dangerous speech, and I began to list them. These characteristics are striking and recurrent, but they don’t define speech as dangerous by themselves.

The most well-known one is dehumanization. It’s extremely common for dangerous speakers to refer to other humans as some disfavored creature perceived as less than human; during the Rwandan genocide of 1994, for example, Hutu propaganda called Tutsis “cockroaches.” Rats and cockroaches seem to be the most common creatures in dangerous speech, I suppose because they are universally despised. If you can get people to regard another human as a cockroach, then they think, It’s O.K. to do to that person what you would do to a cockroach. It’s all about threat and fear.

Another common hallmark of dangerous speech is called “accusation in a mirror.” That is when a dangerous speaker tells his own people that the other group is planning to attack the in-group when, in fact, the speaker wants the in-group to condone an attack against the other group. This term was coined not by me or any other researcher; it was found after the Rwandan genocide in a manual for making anti-Tutsi propaganda. Accusation in a mirror is a major feature of White supremacist language. The whole idea of the “great replacement” is that there’s another group planning to wipe you out, and they will do that unless you (which usually means the men of the group who want to see themselves as its protectors) defend yourself and your women and children. Accusation in a mirror makes violence—and people who commit it—seem noble and virtuous.

Angwin: How much dangerous speech is out there? A 2021 survey found that more than one-third of Americans agreed with the statement, “The traditional American way of life is disappearing so fast that we may have to use force to save it.”

Benesch: I’ve been working on this topic for a decade, and I’ve been very cautious about ever saying that it’s getting worse or that there’s an increase, since that’s hard to measure in a robust, definitive way. But I have recently concluded that in the United States at the moment there is, at minimum, a striking and alarming shift in the extent to which dangerous speech is used and condoned by political leaders and other influential people.

Take [Republican Senate candidate] Eric Greitens’ campaign ad, in which he said, “Let’s go RINO hunting.” The ad showed him with other men dressed in paramilitary gear, kicking down the door of a house and blasting into it as if to massacre the people inside. The ad was criticized and eventually taken down, but there was no major outcry from other Republicans, who by staying silent are condoning dangerous speech.

Obviously, Trump and Jan. 6 are among the most striking and egregious examples. Large numbers of Republican and far-right leaders condoned incitement to violence before, during, and after Jan. 6. They’re still doing it.

It’s not true that all of the dangerous speech in this country comes from the right; there are absolutely examples from the left. But as far as I can see, the amount and intensity of dangerous speech are much, much greater on the far right. It’s this speech, and the habit of condoning it, directly or implicitly by saying nothing, that has moved into the mainstream.

Angwin: How should we be responding to dangerous speech?

Benesch: What I really want to do is find ways of making what we now call dangerous speech less convincing and less effective without impinging on freedom of expression. 

In any society at any time, there is some small proportion of people who are extremist. But the majority of people are not extremist; their views are moderate and change gradually. Those people are absolutely vital, so we all have a tremendous responsibility to keep them from going over the edge and becoming convinced by dangerous speech and, especially, dangerous disinformation.

Influential people in various spheres need to refrain from dangerous speech themselves and denounce it when other influential people use it. Wherever possible, we all need to put pressure on people who are producing and disseminating dangerous speech.

Angwin: Do any of the big tech platforms include dangerous speech metrics in their content moderation policies? And what do you think they could do to improve? 

Benesch: No company has incorporated the dangerous speech framework whole hog into content moderation, as far as I know, but several have used dangerous speech ideas to decide where to draw the line between permissible and impermissible content, including Facebook, Twitter, Google, and Spotify. They have also used dangerous speech ideas in downranking, which is a major, often overlooked method on which platforms increasingly rely.

Here are some ideas for what they should do: 

  1. Distinguish between hate speech and dangerous speech, in part so that they can better protect freedom of speech, for example by allowing content that is provocative but not dangerous.
  2. Try to build classifiers for dangerous speech, for review by humans; it’s very important to detect it quickly, after all. (A minimal sketch of such a triage pipeline follows this list.)
  3. Identify ambiguous content as inflammatory by observing how people are reacting to it, as I suggested in Noema magazine and in the Los Angeles Times.
  4. Recruit more people with detailed knowledge of particular cultures, places, and languages, to let platforms know when dangerous content proliferates.
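
On the second idea, here is a minimal sketch of what a classifier-plus-human-review triage pipeline could look like, assuming Python and scikit-learn. The tiny example corpus, the labels, and the 0.5 threshold are all hypothetical placeholders; a real system would need vastly more data and the full five-factor context analysis:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = shows dangerous-speech hallmarks
# (dehumanization, accusation in a mirror), 0 = heated but not dangerous.
texts = [
    "they are vermin and must be wiped out before they destroy us",
    "those people are coming to attack us, defend your families now",
    "i strongly disagree with this policy and will protest it",
    "this politician is wrong and should lose the next election",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams, then logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def triage(post: str, threshold: float = 0.5) -> str:
    """Score a post and route likely dangerous speech to human review."""
    p_dangerous = model.predict_proba([post])[0][1]  # probability of class 1
    return "send to human review" if p_dangerous >= threshold else "no action"

print(triage("they are cockroaches and they are coming for your children"))
```

The design choice worth noting is the output: the model never removes anything itself; it only prioritizes the review queue, and people, applying something like the five-factor framework above, make the actual call.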

As always, thanks for reading.

Best,
Julia Angwin
The Markup

(Additional Hello World research by Eve Zelickson.)
