Donald Trump looking at the camera from behind a pile of Facebook flags added to his posts. Sam Morris/Tasos Katopodis/Getty Images

Citizen Browser

Trump’s False Posts Were Treated with Kid Gloves by Facebook

Data from Citizen Browser shows how rare it is for a post to be called “false”—especially if you’re the president


As users drifted through Facebook in the aftermath of the presidential election, they may have run across a satirical post about the Nashville bombing in December. Playing off conspiracy theories about COVID-19 death diagnoses, the viral photo jokingly suggested the bomber had “died from COVID-19 shortly after blowing himself up.”

In early January, a New York woman was shown the photo—shared by a friend—in her news feed. Facebook appended a note over the top: The information was false.

But four days earlier, when a woman in Texas looked at Facebook, she saw the same post—shared to her feed by a conservative media personality with about two million followers. That post had only a “related articles” note appended at the bottom, directing users to a fact-check article and making it much less obvious that the post was untrue.

In August, as the election approached and misinformation about COVID-19 spread, Facebook announced it would give new fact-checking labels to posts, including more nuanced options than simply “false.” But data from The Markup’s Citizen Browser project, which tracks a nationwide panel of Facebook users’ feeds, shows how unevenly those labels were applied: Posts were rarely called “false,” even when they contained debunked conspiracy theories. And posts by Donald Trump were treated with the less direct flags, even when they contained lies.

The Markup shared the underlying data for this story with Facebook.

“We don’t comment on data that we can’t validate, but we are looking into the examples shared,” Facebook spokesperson Katie Derkits said in a statement. 

Overall, we gathered Facebook feed data from more than 2,200 people and examined how often those users saw flagged posts on the platform in December and January. We found more than 330 users in the sample who saw posts that were flagged because they were false, devoid of context, or related to an especially controversial issue, like the presidential election. But Facebook and its partners used the “false” label sparingly—only 12 times.


Facebook has spent years grappling with how to fact-check content—especially when the posts come from politicians. In a 2019 blog post, the company argued that it wasn’t “an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience.”

“My guess would be that Facebook doesn’t fact-check Donald Trump not because of a concern for free speech or democracy, but because of a concern for their bottom line,” said Ethan Porter, an assistant professor at George Washington University who has researched false information on the platform. 

After years of controversy, Facebook indefinitely suspended Trump, who had 150 million followers, following the riot at the United States Capitol on Jan. 6, saying the risks of keeping him on the service were now too great. But that could change. The Facebook Oversight Board, a body created by Facebook to review and possibly overturn its decisions, will determine whether to reinstate the former president’s account.

The company says it removes at least some outright false posts, and our analysis can’t account for how many false posts our panel would have seen without that action. Our sample is also small relative to Facebook’s universe of billions of users who may have seen additional flags on their feeds. 

Within our panel, however, a clear trend emerged: Our data showed that, fueled by Trump’s false claims, more flagged content overall went to Trump voters. Among our panelists who voted for Trump, 9.3 percent had flagged posts appear in their feeds, compared with only 2.4 percent of Biden voters. Older Americans in our sample were also more likely to encounter the flagged posts.

Porter said Facebook has been too passive in how it handles lies—a flaw that undermines the entire platform. 

“They’ve chosen not to aggressively fact-check,” Porter said. “As a result, more people believe false things than would otherwise, full stop.” 
