Hello, friends,
When I first started covering the tech industry, in 1996, as a reporter at the San Francisco Chronicle, I didn’t expect to be covering racial discrimination. But when I looked around at the people working in the industry, I noticed that very few of them were Black or Latino.
And so I dug into it, filing public records requests and interviewing people across the industry. The result was The Digital Divide, an investigation co-written with Laura Castaneda, who is now a journalism professor at USC Annenberg, and published in 1998. The investigation revealed how few Black and Latino workers Silicon Valley companies employed and that many leading tech firms had been cited by the U.S. Department of Labor for affirmative action violations.
Reporting and writing about racial discrimination opened my eyes to realities that I hadn’t truly absorbed before. Palo Alto, the Bay Area town where I grew up, was and remains segregated. Nearly all the Black people lived on the other side of the creek from my house. East Palo Alto is still poor, while Palo Alto is awash in tech money.
Now, many years later, at The Markup we continue to find that technology, which has the potential to be a tool for empowerment, more often amplifies inequities than challenges them.
Take, for instance, the investigation we published this week in collaboration with THE CITY: “NYC’s School Algorithms Cement Segregation. This Data Shows How,” by Markup reporter Colin Lecher and investigative data journalist Maddy Varner. They analyzed the data for the top New York City public high schools that admit students using screening algorithms and found that overall these schools admitted Black and Latino students at half the rate of White and Asian students.
The algorithms use criteria such as test scores, middle school attendance, and behavior records that critics say disadvantage poor and minority students. City officials say they have removed some troublesome criteria—like where students live—but are still working to make schools more equitable.
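To make the mechanism concrete, here is a purely illustrative sketch of how a screen of this kind can work. This is not the city's actual code; the fields, weights, and penalties are invented for the example:

```python
# Purely illustrative: a hypothetical admissions screen of the kind
# described above. The criteria mirror those named in the story
# (grades, test scores, attendance, behavior), but every weight
# and field here is invented, not taken from any real rubric.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gpa: float          # middle school grade average, 0-100
    test_score: float   # state test score, 0-100
    absences: int       # days absent
    lateness: int       # days late

def composite_score(a: Applicant) -> float:
    # Weighted academic record, minus attendance penalties:
    # the kind of facially neutral inputs critics say track
    # poverty and neighborhood as much as student ability.
    score = 0.6 * a.gpa + 0.4 * a.test_score
    score -= 0.5 * a.absences + 0.25 * a.lateness
    return score

def screen(applicants: list[Applicant], seats: int) -> list[Applicant]:
    # Rank everyone by composite score and admit the top N.
    ranked = sorted(applicants, key=composite_score, reverse=True)
    return ranked[:seats]
```

Nothing in a rule like this mentions race, which is exactly why critics focus on the inputs: attendance and test scores can encode the very disadvantages the schools are supposed to overcome.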
This story made me think about the ways tech accountability reporting of the type we do at The Markup has started to evolve into civil rights reporting. So I rounded up some of our coverage about technology exacerbating and amplifying racial inequities:
- In April, we reported that YouTube blocks advertisers from using dozens of social and racial justice terms, including Black Lives Matter, to locate YouTube videos and channels but had allowed searches for hate terms such as “White Lives Matter.” After we reached out to the company, YouTube removed almost all the hate terms but added additional social justice terms to its block list, including “civil rights” and “Black excellence.”
- In March, we reported that major universities were using race as a factor in predictive risk scores that could be used to steer Black students out of science and math majors. Texas A&M announced it was dropping the use of race in its scores after our reporting.
- Also in March, we reported that Facebook was showing official information about COVID-19 and vaccines to fewer of its Black users than to users in other demographic groups, according to data from The Markup’s Citizen Browser project. Facebook responded that our panel was not an “accurate reflection of the full breadth of people who see ads on Facebook.”
- Last August, we reported that Facebook was allowing advertisers to target employment ads by race, in violation of federal law that prohibits employers from discriminating by race and of Facebook’s own settlement with civil rights groups over the issue. A week after The Markup contacted Facebook, the company announced it was eliminating the multicultural affinity categories altogether after years of internal debate.
- Last July, we reported that Google’s ad buying portal steered advertisers toward pornographic terms when they searched for “Black girls,” a problem that UCLA professor Safiya Noble had written about years earlier regarding the company’s public search engine but that Google had failed to fix. Google removed the associated pornographic keywords after The Markup reached out for comment.
- Also last year, in a collaboration with The New York Times, we reported on the faulty algorithms that many landlords use to screen renters. One of our findings was that members of minority groups, who often draw on a smaller pool of surnames (such as the 12 million Latinos nationwide who share just 26 surnames), suffer outsized harm from the name-matching errors in these systems; a naive sketch of that failure mode follows this list. Six Democratic senators cited our investigation 11 times in a letter sent to the Consumer Financial Protection Bureau raising concerns about the tenant screening industry, and some states are working on reforms.
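The surname problem is easy to see in miniature. Below is a deliberately naive matcher, invented for illustration (real screening products are more elaborate), that does what the flawed systems we reported on effectively do: tie court records to applicants by name.

```python
# Illustrative only: a naive record matcher of the kind the
# tenant-screening story describes. Matching on surname alone
# is the failure mode at issue, not any vendor's actual code.
def surname_matches(applicant_name: str, court_records: list[dict]) -> list[dict]:
    # Pull every record whose surname matches the applicant's.
    surname = applicant_name.split()[-1].lower()
    return [r for r in court_records
            if r["name"].split()[-1].lower() == surname]

records = [
    {"name": "Ana Garcia", "record": "eviction, 2017"},
    {"name": "Jose Garcia", "record": "collections, 2019"},
]

# "Maria Garcia" has no record at all, yet the matcher hands
# back two other Garcias' records as if they were hers.
print(surname_matches("Maria Garcia", records))
```

When 12 million people share just 26 surnames, a matcher like this routinely pins one Garcia’s eviction on another.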
And of course, our coverage has barely scratched the surface of the tech-driven inequities at play, some of which are being revealed by researchers and other journalists every day. To take just one important example: Facial recognition software has repeatedly been found to make more mistakes identifying darker-skinned faces. Joy Buolamwini and Timnit Gebru’s seminal paper, “Gender Shades,” found error rates in classifying darker-skinned women’s faces as high as 34.7 percent, compared with 0.8 percent for lighter-skinned men.
As technology increasingly mediates our personal and professional lives, it is becoming urgent that we identify and expose the racial biases embedded in tech algorithms.
So during this holiday weekend, as the U.S. marks the first anniversary of the murder of George Floyd, one of many Black people killed by police, it’s imperative that we examine the racial biases in tech and work to rectify them.
As always, thanks for reading.
Best,
Julia Angwin
Editor-in-Chief
The Markup