Hello, friends,
Imagine for a moment an alternate universe in which Google employed humans to sell ads alongside its search results. And imagine if an advertiser asked Google’s salesperson, “What keywords should I advertise near if I want to reach Black girls?”
It’s pretty unlikely that the salesperson would reply with a bunch of pornographic terms. But that’s exactly what Google’s automated algorithms were doing until we alerted the company.
This week, The Markup reporters Leon Yin and Aaron Sankin revealed that Google’s ad-buying portal, Keyword Planner, was suggesting primarily pornographic keywords to advertisers who typed in “Black girls.” It suggested no keywords at all for “White girls.”
Google spokesperson Suzanne Blackburn told The Markup it was a mistake. “We’ve removed these terms from the tool and are looking into how we stop this from happening again,” she wrote in a statement emailed to The Markup.
This isn’t the first time Google has appeared to fix this exact problem. In 2012, Safiya Umoja Noble published an article, “Missed Connections: What Search Engines Say About Women,” lamenting that when she searched for “Black girls” on Google, the first result was SugaryBlackPussy.com.
Soon after Noble’s article appeared, the pornographic sites quietly vanished from Google search results for “Black girls.” Noble went on to write a book about it, “Algorithms of Oppression: How Search Engines Reinforce Racism” (NYU Press, 2018). But our reporting indicates that somehow Google didn’t fix the underlying issue in its algorithms.
Google’s porn problem is only one of many examples of Big Tech algorithms making egregious mistakes. Consider, for instance, Google’s image recognition algorithm that tagged photos of Black people as gorillas. Two years after that was disclosed, Google still had only patched over the problem: it had simply blocked any results for gorillas at all.
Or consider Facebook’s ad targeting algorithm. The U.S. Department of Housing and Urban Development charged Facebook with housing discrimination last year, alleging that its algorithm considered age and race when targeting ads for housing. And in the Civil Rights Audit Facebook recently released, the auditors said the company had not acted with enough “urgency” to address the discrimination concerns in its ad system.
To help understand why we keep encountering racist algorithms and how we might fix them, I spoke to Ali Alkhatib, a research fellow at the Center for Applied Data Ethics at the University of San Francisco, whose training as both an anthropologist and computer scientist gives him a unique perspective.
The interview is below, lightly edited for brevity.
Angwin: Your paper, “Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions,” describes how algorithms are constantly out of step with cultural norms. Can you explain in layman’s terms how this happens?
Alkhatib: The paper draws on the idea of street-level bureaucrats, the people on the street who decide whether to enforce a policy. For example: a police officer deciding whether to arrest someone for eating on the train or a judge allowing someone out on bail because they are a “good egg.”
Algorithms aren’t making the same nuanced calls. The Markup’s story about Google and Black girls shows how Google just put a bandage over the superficial issue but still continued to show all of these highly sexualized and prejudicial results—which implies the algorithm hasn’t learned anything.
The reason street-level algorithms don’t make the nuanced calls that street-level bureaucrats make is that they lack the sensitivity to recognize new or unfamiliar situations. Not only does the algorithm fail to learn in the moment when it makes a wrong decision, it takes a whole loop and a half for that decision to be labeled as wrong and for the algorithm to be retrained.
That makes it basically impossible for algorithms to keep up with culture because they are not learning and changing as culture updates.
Angwin: But aren’t street-level bureaucrats also biased?
Alkhatib: Absolutely. Street-level bureaucrats are not a panacea. They are the sources of a lot of harm. But we have a way to avoid that: We make government participatory. People recognize that we have a job to do in electing politicians and showing up to local governance meetings because that’s how we push back on abusive policing and racist zoning.
We care about these things because we recognize them as politically consequential.
But we should be thinking about algorithms as politically consequential as well. Currently, Facebook doesn’t have a social contract with its users. If it wants to run a system that is unfair, that’s not the same as a senator saying, “I won’t vote for the interests of my constituency.”
Angwin: So are you calling for the tech companies to have a social contract with their users?
Alkhatib: What I’m calling for is a radical rethinking of our relationship with the algorithms that have an impact on our lives.
Algorithms should have political responsibilities. We should be able to interrogate and challenge the decisions that are made. If we can borrow from the idea of street-level bureaucracies to talk about “street-level algorithms,” then we might be able to borrow from the ways society keeps street-level bureaucrats from running amok.
One tool is oversight: auditing someone’s history of decisions to uncover bias, whether that means looking over a judge’s sentencing record, a police officer’s record of misconduct and violence, or the internal records surrounding a controversial decision.
But that’s only part of it: Once we’ve identified the harm, do we have established mechanisms to change the situation?
Until Congress stands up for the public’s right to interrogate and challenge the Big Tech algorithms that increasingly impact every part of our lives, we at The Markup will continue boring into those black boxes to expose what’s going wrong.
As always, thanks for reading.
Best,
Julia Angwin
Editor-in-Chief
The Markup