
2020 in Review

Algorithms Behaving Badly: 2020 Edition

Computers are being asked to make more and more weighty decisions, even as their performance reviews are troubling

Illustration credits: Morsa Images, Tolga Akmen/AFP/Getty Images and Andrea Ucini

The perils of leaving important decisions to computer algorithms are pretty easily imagined (see, e.g., “Minority Report,” “I, Robot,” “War Games”). In recent years, however, algorithms’ job descriptions have only grown.

They are replacing humans when it comes to making tough decisions that companies and government agencies prefer to say are grounded in statistics and formulas rather than the jumbled calculations of a human brain. Some health insurers use algorithms to determine who gets medical care and in what order of priority, instead of leaving that choice to doctors. Colleges use them to decide which applicants to admit. And prototypes of self-driving cars use them to weigh how to minimize harm during a traffic accident.

Some of that computational outsourcing springs from high hopes—that computer algorithms would take bias out of the lending process, for instance, or help researchers develop a safe COVID-19 vaccine in record time. 

But it’s been proven again and again that formulas inherit the biases of their creators. An algorithm is only as good as the data and principles that train it, and a person or people are largely in charge of what it’s fed. 

Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials. 

Here are some big ones from 2020.


The Racism Problem

A lot of problems with algorithmic decision-making come down to bias, but some instances are more explicit than others. The Markup reported that Google’s ad portal connects the keywords “Black girls,” “Asian girls,” and “Latina girls” (but not “White girls”) to porn. (Google blocked the automated suggestions after The Markup reached out to the company. In the meantime, Google’s search algorithm briefly sent our story to the first page of search results for the word “porn.”)

Sometimes the consequences of such bias can be severe. 

Some medical algorithms are racially biased—deliberately. A paper in the New England Journal of Medicine identified 13 examples of race “corrections” integrated into tools doctors use to determine who receives certain medical interventions, like heart surgery, antibiotics for urinary tract infections, and screenings for breast cancer. The tools assume patients of different races are at different risks for certain diseases—assumptions not always well grounded in science, according to the researchers. The result: a Black man who needed a kidney transplant was deemed ineligible, as Consumer Reports reported, among other disasters.


A related issue emerged in a lawsuit against the National Football League: Black players allege it’s much harder to receive compensation for concussion-related dementia because of the way the league evaluates neurocognitive function. Essentially, they say, the league assumes Black players inherently have lower cognitive function than White players and weighs their eligibility for payouts accordingly. 


Algorithms That Make Renters’ and Lower-income People’s Lives More Difficult

If you’ve ever rented a home (and chances are you have, as renting has skyrocketed since the 2008 financial crisis), a landlord has likely run you through a tenant screening service. Whatever results the background check algorithms spit out generally make the difference between getting to rent the home in question and getting denied—and, The Markup found, those reports are often faulty. The computer-generated reports confuse identities, misconstrue minor run-ins with law enforcement as criminal records, and misreport evictions. And what little oversight exists typically comes too late for the wrongfully denied.

Similarly, MIT Technology Review reported, lawyers who work with low-income people are finding themselves butting up against inscrutable, unaccountable algorithms, created by private companies, that do things like decide which children enter foster care, allocate Medicaid services, and determine access to unemployment benefits.  


Policing and Persecution

There’s an enduring allure to the idea of predicting crimes before they happen, even as police department after police department has discovered problems with data-driven models. 

A case in point: the Pasco County Sheriff’s Department, which The Tampa Bay Times found routinely monitored and harassed people it identified as potential criminals. The department “sends deputies to find and interrogate anyone whose name appears” on a list generated from “arrest histories, unspecified intelligence and arbitrary decisions by police analysts,” the newspaper reported. Deputies appeared at people’s homes in the middle of the night to conduct searches and wrote tickets for minor things like missing mailbox numbers. Many of those targeted were minors. The sheriff’s department, in response, said the newspaper was cherry-picking examples and conflating legitimate police tactics with harassment. 

Facial recognition software, another policing-related algorithmic tool, led to the faulty arrest and detention of a Detroit man for a crime he did not commit, The New York Times reported in an article cataloging the technology’s privacy, accuracy, and race problems.

And in a particularly chilling development, The Washington Post reported, the Chinese tech company Huawei has been testing tools that could scan faces in crowds for ethnic features and send “Uighur alarms” to authorities. The Chinese government has detained members of the Muslim minority group en masse in prison camps—persecution that appears to be expanding. Huawei USA spokesperson Glenn Schloss told the Post the tool “is simply a test and it has not seen real-world application.” 


Workplace Surveillance

Big employers are turning to algorithms to help monitor their workers. This year, Microsoft apologized after it enabled a Microsoft 365 feature that allowed managers to monitor and analyze their workers’ “productivity.” The productivity score factored in things like an individual’s participation in group chats and number of emails sent. 

Meanwhile, Business Insider reported that Whole Foods uses heat maps, which weigh things like number of employee complaints and the local unemployment rate, to predict which stores might see unionization attempts. Whole Foods is owned by Amazon, which has an elaborate apparatus for monitoring worker behavior. 


Revenge of the Students

Anyone searching for inspiration in the fight against an algorithm-dominated tomorrow might look to students in the United Kingdom, who took to the streets after exams were canceled during the pandemic and the education system used an algorithm to assign grades based in part on schools’ past performance. Or these kids, who discovered their tests were being graded by an algorithm—and then promptly figured out how to exploit it by essentially mashing up a bunch of keywords.

What did we miss? Send tips to tips@themarkup.org.
