Hello, friends,
In October, Facebook announced it would ban groups and pages from the QAnon conspiracy movement. But a month later we found a Facebook page called “Cue” featuring QAnon videos, as well as an ad promoting the page.
Jeremy B. Merrill’s report this week in The Markup would be almost funny if it weren’t so familiar. When Jeremy alerted Facebook, it removed the page and the ad. “Enforcement will never be perfect,” Rob Leathern, a Facebook ad executive, told The Markup.
The owner of the Facebook page, a man who identified himself to Jeremy as “Ashley,” said that he, too, was mystified as to why his page promoting the “Q movement” had not been banned.
“We managed to skim by and it’s pure luck,” Ashley said. “We don’t post anything that’s completely obvious. Videos are a little bit harder [for Facebook to catch], since they’re a little longer, so you have to watch them.”
As most of you know, there’s a tired cycle that goes like this: A tech platform allows harmful content; people harmed by that content beg for rules to ban it; after a time, the platform agrees; the rule is immediately, and repeatedly, found to go unenforced.
Here are just a few of the many examples:
● In 2016 Terry Parris Jr. and I reported in ProPublica that Facebook was allowing advertisers, including housing advertisers, to target users by race. The company promised to correct the problem, but a year later, such ad targeting was still occurring.
● In 2018 Amnesty International released a report on Twitter’s failure to enforce its “Hateful conduct policy.” Although Twitter responded to the report that it was “attentive to Amnesty’s recommendations,” the organization found this year that women on the platform still face “persistent abuse.”
● The New York Times reported in 2019 that when it alerted YouTube that its system for recommendations was “circulating family videos to people seemingly motivated by sexual interest in children,” YouTube removed only some of the videos.
● And this year we reported on Amazon’s failure to crack down on peptides being sold on its site—even after we had alerted the company to those listings several months earlier.
To understand whether there might be a better way, I spoke to Christopher Wylie, the Cambridge Analytica whistleblower who sits on the steering committee of a Forum on Information & Democracy group that just released a report on how to end infodemics. The international group includes the journalist Maria Ressa, Marietje Schaake at Stanford, ex–Facebook investor Roger McNamee, former UN special rapporteur David Kaye, and many others. The recommendations are being submitted to a significant number of national governments.
Wylie is the author of Mindf*ck: Cambridge Analytica and the Plot to Break America. In his day job, he does research for the fashion brand H&M, but as he puts it, “My passion project is ethical tech.”
Wylie has a distinct point of view that does not represent the views of The Markup. The interview is below, edited for brevity.
Angwin: So let’s start with the 2020 U.S. presidential election. Two years ago you blew the whistle on how Cambridge Analytica harvested data from 50 million Facebook users and used it to boost the Trump campaign in 2016. How do you think it went this time around?
Wylie: This time around was different. In round one, people weren’t paying attention to the problem. When I first came out as a whistleblower, disinformation wasn’t a word people talked about. I think we spent the next four years trying to unpack what happened in 2016.
In 2020, although we are paying more attention to it and we have a more mainstream vocabulary for it, the fundamentals are the same. There was no national response to scaled disinformation. There were no regulatory or legal changes. So we have a lot of the same players involved, namely Facebook, which we are currently relying on to police our elections. I’m very uncomfortable with a private company taking on what I think should be a public role.
What we are also seeing is that disinformation does not have to be about politics. You see that with COVID. People are susceptible to misinformation, and that can have real harms. We’re more aware of what is happening, but we haven’t had many substantive changes.
Angwin: This report contains 250 recommendations (!) on how to stop disinformation, so we won’t get to all of them. But I’d like to start with some of the recommendations about how to build better tech. Can you explain what you mean by a “digital building code”?
Wylie: This comes from my experience with dealing with regulatory agencies and also talking to members of Congress and parliaments and noticing the difference in language being used. When I would speak to members of Congress, they were using the language of Silicon Valley—describing everything as a service.
But if you are actually working in tech, you don’t think of yourself as building a service. You think of yourself as constructing things. And construction is often governed by things like the precautionary principle and risk mitigation. So if we think of these technologies in terms of architecture and engineering, the question becomes, Why are you allowed to release something without testing for safety and testing what the harm could be?
Currently, if I wanted to release a toaster, there would be a greater regulatory burden on me to prove it is safe than there is for releasing a digital platform.
Angwin: Interesting! I called for something similar in my book, Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance, when I suggested we could follow the lead of the automobile industry by setting baseline safety standards for technology.
Wylie: It’s interesting to notice that the arguments that automakers made against air bags and other safety measures were similar to what Silicon Valley says today: that consumers are opting in and that regulation would inhibit innovation.
But the difference with autos was that there was a countervailing force: insurance companies that have to pay for mangled bodies. There is no one who has to pay for mangled elections. You pay as a citizen.
Angwin: Ah, yes, let’s talk about legal liability. It is another one of my favorite topics. Cybersecurity experts such as Bruce Schneier have suggested that liability could lead to safer software. Tell me about your plans for creating liability for software.
Wylie: There is a section in the report about introducing the concept of software malpractice: if you don’t live up to a minimum set of standards articulated in the law—whether it’s safety or quality or what have you—you could be liable for malpractice.
I like to think about it using the logic of construction or architecture. Imagine a building with an arsonist in it. As the architect, you are not liable for the actions of the arsonist. But you would be liable if you didn’t put enough fire exits in the building or you used highly flammable paint.
What is interesting about looking at it through professional standards is that you can sidestep the debate about Section 230 and censorship. Instead, we can simply say that if the way you designed a platform was unreasonable or unsafe and it caused harm, even harm that originated with a user, you should be liable.
If you look at what happened in Myanmar, the fact that there were virtually no consequences for a platform used in crimes against humanity is outrageous. Imagine if we could give a voice to people around the world to help create minimum standards.
Thanks, as always, for reading.
I’ll be taking next week off for the Thanksgiving holiday but will be back in your inboxes on Dec. 5.
Stay safe,
Julia Angwin
Editor-in-Chief
The Markup