Hello, friends,
Today marks 20 years since al-Qaeda attacked the United States by weaponizing commercial aircraft to kill 2,977 people.
Sadly, the legacy of 9/11 is more than the lives lost that day. In reaction to the attacks, the U.S. launched the so-called war on terror—an amorphous battle against an ephemeral and evolving target that spanned from Afghanistan to Iraq to Yemen and other corners of the globe. The death toll is estimated at nearly one million people.
The war on terror also emboldened the U.S. to engage in mass surveillance; racial, religious, and ethnic profiling; indefinite detention; torture; and assassinations. Those practices have damaged America’s reputation as a champion of human rights and helped terrorist groups recruit more members, as is well chronicled in the Brennan Center for Justice’s 9/11 retrospective, “The True Costs of National Security.”
A defining feature of the war on terror has been drone strikes—attacks carried out by unmanned, remotely piloted aerial vehicles that drop munitions—which have often killed the wrong people. Increasingly, drones are being built that do not require a human pilot and can use artificial intelligence to identify targets.
The rise of lethal autonomous weapons systems (LAWS) that can make their own decisions about whom to kill is truly one of the most terrifying legacies of the past 20 years. Proponents of the technology argue that autonomous weapons can help in places where a drone cannot communicate with its base or its human operator. Critics say that humans should always be in charge of killing decisions.
To better understand the concerns about autonomous killing machines, I spoke with Liz O’Sullivan, who in 2019 publicly quit her job in protest over the unwillingness of her employer, Clarifai, to pledge never to contribute to the development of lethal autonomous weapons systems. She wrote an influential public letter at the time calling for autonomous weapons to be “banned to the same degree as biological ones.”
Liz is the new CEO of Dr. Rumman Chowdhury’s Parity, a platform that automates model risk and algorithmic governance. Previously, Liz was the first technology director of the Surveillance Technology Oversight Project and co-founder of model monitoring platform Arthur. She is a member of the International Committee for Robot Arms Control, where she advises the Campaign to Stop Killer Robots on all things AI.
The interview below has been lightly edited for brevity and clarity.
Angwin: How did you get into the field of autonomous weapons?
O’Sullivan: The scariest and deadliest field of all time?
Angwin: Yeah, how did that happen?
O’Sullivan: People ask me this a lot. You have to understand that I grew up in a science fiction family. “Star Trek” was always on, and reading [science fiction writer Isaac] Asimov was the family pastime. We’ve been, as a species, thinking about and talking about this moment for as long as fiction has existed.
When I went to work at a computer vision company, my first impression was, “Wow, this technology is magic, it’s bonkers cool. It’s going to change the world.” I spent the next two years playing with state-of-the-art computer vision. My team was responsible for collecting data, managing that data, labeling it, sending it around the world to various annotation teams, and then getting it back and trying to train models with as little discriminatory bias as possible.
The ways it failed were always crazy and unpredictable, things no human could ever have seen coming. When we asked the developers and data scientists to explain, for instance, why this machine thinks this baby is a bicycle, no one could ever answer that.
Over the course of my time at that company, an opportunity to work with the military arose.
At first we didn’t know what that contract really was. We knew it was a government contract. We knew it was very lucrative. The people who were directly working on the project were separated from us in a room with a closed door, with a blocked window, surveillance cameras in the ceiling. Everybody had spyware put on their laptop. They no longer came to eat lunch with us. They no longer went out with us.
Eventually we learned its name, Project Maven, and found its website, where you could see very clearly that it was targeting the Middle East, and that it was drone photography.
So we started talking. We had open discussions about working with the military. People were sharing their concerns and wondering what the real danger here was, not naively, but very sincerely and with great care. [We found] there were prohibitions claimed by the military stating that the U.S. would never arm robots and humans would always be in control of these machines. But there were also papers being released from military organizations saying this is something we’re experimenting with. We later learned that these prohibitions were merely cosmetic, and the U.S. retained the right to move forward with whatever degree of autonomy it saw fit.
I had originally thought that the problem of autonomy in warfare was a problem of the distant future, until it hit me all at once that crude versions of this weapon were possible to create with current technology. I knew something needed to be done about it. So I came forward. I wrote an open letter to the CEO asking if they would make that promise and sign the pledge that the Future of Life Institute had put forward. But when they refused to sign that pledge, I quit my job and I became an activist.
I started working with the Campaign to Stop Killer Robots. They invited me to the United Nations for my first public speaking engagement about the topic, which was a thrill and frustrating in its own way. And I joined the International Committee for Robot Arms Control with Peter Asaro and Noel Sharkey to try to raise awareness that this is not a tomorrow problem, it is a today problem, and to try to stop it before we open Pandora’s box.
Angwin: There’s been a lot of discussion at the UN and at other levels, but what has happened since then? What actually has occurred?
O’Sullivan: The forum for this discussion is in the UN Convention on Certain Conventional Weapons, the CCW. It’s a treaty-based body that has successfully banned a couple of technologies in the recent past, including cluster munitions and blinding lasers. But this topic is more contentious in ways that I could never have predicted.
I grew up in a very patriotic family, so I was shocked to find out that one of the major forces blocking humanitarians from enacting a ban was actually the United States of America. We typically assign the blame for this to Russia and China, and that’s not wrong. But the degree to which Western powers are driving this forward is surprising and scary.
There is, however, a ton of support for the movement. According to Laura Nolan, who represented tech worker views at the CCW this year, a growing coalition of countries supports a new international regulation or some sort of treaty or prohibition on the use of these weapons. But it has consistently been the United States, Russia, China, Australia, and Israel steadfastly standing in the way. It’s so telling to me that the divide here is between the countries that can afford drone armies and those, like Palestine, that fear they’ll become the targets. This year we had a new addition to the coalition against a prohibition, which was India—surprising in some ways, but also perhaps the result of the new Quad Alliance between Japan, Australia, India, and the U.S.
This is the eighth year that the CCW has taken up this issue, and very little has happened. The Campaign is starting to think about pursuing approaches that go outside of this treaty.
Angwin: What are you seeking in this treaty?
O’Sullivan: We want to make sure that humanity retains meaningful human control over the application of force in all contexts.
We want to prohibit the use of these kinds of weapons against humans. We want to prohibit them from being used in inhabited areas. That’s sort of the bare minimum, and that’s the result of a ton of negotiation and consensus building.
We don’t want facial recognition or identity-based or ethnicity-based drone attacks. I think one of the worst-case scenarios is ethnicity-based genocide, which would be easily empowered by this technology. After all, that’s what AI does: It classifies things into groups. It’s very good at saying this thing is different from that thing in some visually identifiable way. When you apply that to people, it breaks down. You get a situation where an algorithm says let’s treat this group of people differently from another. That can mean giving them a loan, hiring them, or firing them, or it can mean killing them or not killing them, which is why this is so important: the stakes are so high.
And importantly, we don’t want to outlaw rocket defense systems as part of the ban, which has long been a sticking point. Using an autonomous drone to defend against an incoming missile has its own set of problems, of course, but they are stationary, they don’t target individuals, and they are limited in scope and scale. They are also closely supervised; there’s always somebody watching—a human in the loop.
Angwin: What does the autonomous weapon threat look like today?
O’Sullivan: There are lots of ways to define a fully autonomous weapon, which is one of the major sticking points in the treaty [discussions], but we’d like to focus on machines that can use sensor input to select and destroy their own targets, especially if they’re mobile.
The current state of autonomous weapons is that they are brittle and faulty and fail in unpredictable ways. We worry about things like a drone mistaking civilians for insurgents. We worry about hacking attempts in which a whole fleet of drones could be co-opted into the enemy’s camp. We worry about how cheap and easy this technology makes it for nations to go to war.
This will fall disproportionately on underdeveloped nations and the public of authoritarian countries. We’ve already seen signs of it: the first supposed use of autonomous mode happened a couple of years ago in Libya, where Turkey deployed drones against retreating militia soldiers. These weapons will destroy not just people but buildings and infrastructure. The long-term risk is that war at machine speed will be impossible for the individual to defend against.
Angwin: What do you say to the contrarian argument that humans are already missing their targets with drones, so machines will be better?
O’Sullivan: We hear that a lot. First and foremost, it’s a faith-based argument. I’ve never seen any evidence that computers are better than people at identifying human qualities. Intent is not something you can infer from the exterior.
I can tell you that the images these drones collect are extremely low resolution and small, which limits the ability to infer anything legitimate about the photo itself, much less with any degree of granularity. Even if we grant that you can improve accuracy by some small percentage, which I’m still not convinced of, we also have to add scale to that notion.
Because we aren’t sending one autonomous drone at a time; we would be sending a swarm of a thousand. The degree of damage that’s possible is higher, and so is the possibility for accidents.
I think people don’t really understand how close this reality is and how much power they have to prevent it from happening. Just last year, the issue gained enough momentum that we saw our first presidential candidate, Andrew Yang, take a stand against autonomous weapons on the campaign trail. Our governments are the ones driving this negotiation, and they need to hear from us, because there’s still time to prevent the proverbial robot apocalypse.
I’m genuinely worried about the degree to which we’re handing control over to machines in every aspect of life and expecting them to do things we never programmed them to do. We are the ones creating this technology, and the decision to arm these machines or not is ours. And if we decide to arm them and something goes wrong, that’s on us.
As always, thanks for reading.
Best,
Julia Angwin
Editor-in-Chief
The Markup