Hello, friends,
Earlier this month Apple dropped a privacy bombshell. After years of touting its strong commitment to privacy, the company said it would soon start scanning photos stored on your iPhone to identify child sexual abuse imagery.
The uproar from the privacy and security community was immediate. The Electronic Frontier Foundation called it a “backdoor to your private life” that would be abused by repressive regimes worldwide. The Center for Democracy and Technology said the move “threatens users’ security and privacy.” An open letter to Apple decrying the decision has more than 7,500 signatures from leading security and privacy researchers.
Meanwhile, advocates for exploited children applauded. The National Center for Missing and Exploited Children (NCMEC) called Apple’s move “a game changer.” (In a leaked memo, the center also called critics the “screeching voices of the minority.”) Sen. Richard Blumenthal called it a “welcome, innovative, and bold step.”
Apple has tried, unsuccessfully so far, to reassure critics. In its announcement, the company declared that it would refuse any government orders to scan for images other than child sexual abuse imagery. And in an interview with TechCrunch, Apple’s head of privacy, Erik Neuenschwander, pledged that the company’s commitment to privacy had not changed “one iota. The device is still encrypted, we still don’t hold the key,” he said.
For folks like me who care about privacy but also care about protecting exploited children, the extremely polarized discussion has been hard to parse. For a nuanced take, I turned to Alex Stamos, the director of the Stanford Internet Observatory, a lab that studies abuse in information technologies, who just published an op-ed about Apple’s move in The New York Times. Before joining Stanford, he was the chief security officer at Facebook, where he led the company’s investigation into manipulation of the 2016 U.S. election, and the chief information security officer at Yahoo. In 2004, Stamos co-founded iSEC Partners, a security consultancy.
The interview below has been edited for brevity and clarity.
Angwin: Let’s start with the facts. What is Apple doing, and what are they not doing?
Stamos: Apple announced three totally different child safety initiatives. I’ll talk a bit about them in order of controversy. The first is they announced that if you ask Siri for child exploitation material, or if you’re a kid who asks about being exploited, it will refer you to a helpline to get help. I don’t think there’s anything too controversial about that. I don’t believe that a lot of people are asking Siri for child exploitation material, so it’s probably not really that important.
The second is they created a new mechanism, built into iMessage, that is activated for child accounts that are part of an iCloud family. For those accounts, it looks to see whether naked photos, either of adults or children, are being sent or received. For an account belonging to a child under 13, the kid is asked whether they still want to look at an incoming image, and if they say yes, a parent is notified. If the child is 13 to 17, they are asked the same question, but the parent is not notified.
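To make that flow concrete, here is a minimal sketch of the age-based logic described above. The type and function names are hypothetical, not Apple’s actual implementation, and the on-device nudity detection is treated as a black box that has already flagged an image.

```python
# A minimal sketch of the age-based notification flow described above.
# The names here are hypothetical illustrations, not Apple's actual API;
# the on-device nudity classifier is treated as a black box that has
# already flagged an image.

from dataclasses import dataclass


@dataclass
class ChildAccount:
    age: int  # age of the child account in the iCloud family


def should_notify_parent(account: ChildAccount, child_chose_to_view: bool) -> bool:
    """Return True if a parent should be notified about a flagged image."""
    if not child_chose_to_view:
        # The child declined to view (or send) the flagged image; nothing more happens.
        return False
    if account.age < 13:
        # Under 13: choosing to view a flagged image notifies the parent.
        return True
    # Ages 13 to 17: the child is warned and asked, but no parent is notified.
    return False


# A 12-year-old who taps through the warning triggers a notification; a 15-year-old does not.
print(should_notify_parent(ChildAccount(age=12), child_chose_to_view=True))  # True
print(should_notify_parent(ChildAccount(age=15), child_chose_to_view=True))  # False
```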
The third thing they announced was that they are moving scanning for known child exploitation images onto the iPhone. And that’s the area that I think rightfully has been the most controversial.
Angwin: Didn’t they say that they’re only scanning photos that are uploaded to the cloud?
Stamos: They have been very confusing. What’s happening is they have created a new mechanism to test photos on your device against a list of fingerprints that Apple builds into your device. That is only supposed to run on your photos if you have iCloud Photos turned on, but the actual testing happens on the phone.
I want to go back a little bit and talk about how other people do this. For a little over a decade, big cloud companies have scanned the images that you upload or share with others. So if you send photos via Facebook, or if you create a Google Drive folder and then share it with somebody else, they will scan those images to see whether they’re child exploitation material. The basic technology they use is called PhotoDNA. It was invented about a dozen years ago by Microsoft and Hany Farid at Cal [UC-Berkeley]. It uses a list of fingerprints of known exploitation images maintained by the National Center for Missing and Exploited Children.
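To give a rough sense of what fingerprint matching looks like in general, here is a toy sketch. It is not PhotoDNA, which is proprietary and far more robust against cropping and re-encoding; the simple “average hash” and the function names below are purely illustrative. The shape of the approach is the same, though: reduce each photo to a short fingerprint, then check whether it lands within a few bits of any fingerprint on a known list.

```python
# A simplified illustration of fingerprint-based matching, not PhotoDNA itself
# (PhotoDNA is proprietary and far more robust). A toy "average hash" over an
# 8x8 grayscale grid stands in for the real perceptual hash: each photo becomes
# a 64-bit fingerprint, which is then compared against a list of known fingerprints.


def average_hash(pixels: list[list[int]]) -> int:
    """Compute a 64-bit hash from an 8x8 grid of grayscale values (0-255)."""
    flat = [value for row in pixels for value in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        bits = (bits << 1) | (1 if value >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_blocklist(photo_hash: int, blocklist: set[int], threshold: int = 4) -> bool:
    """A photo 'matches' if its hash is within a few bits of any known fingerprint."""
    return any(hamming_distance(photo_hash, known) <= threshold for known in blocklist)


# Example: a slightly altered copy of a known image still matches its fingerprint.
original = [[(row * 8 + col) * 4 for col in range(8)] for row in range(8)]
altered = [list(row) for row in original]
altered[0][0] += 40  # small perturbation, e.g. from re-encoding
known_fingerprints = {average_hash(original)}
print(matches_blocklist(average_hash(altered), known_fingerprints))  # True
```

In the designs Stamos describes, the big cloud companies run this kind of comparison on their servers; Apple’s plan moves the comparison onto the phone itself, against a fingerprint list shipped with the device.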
This has gone on for about a decade, and there’s an interesting legal balance here, where the law is that if an electronic service provider—in the parlance of the law—becomes aware of child abuse imagery, they are required to report it to NCMEC, but they’re not required to look. The scanning is a voluntary thing that these companies do, but once they scan, they then have responsibilities [to report] anything they find.
Traditionally, Apple has done very little [reporting]. In the last set of numbers available from NCMEC, Facebook filed 20 million reports over 2020. Apple filed 265. Clearly nothing that is fully automated is going to create only 200 reports. So Apple has traditionally not done this, and they are somewhat catching up to the rest of the industry.
Angwin: You have said that the issue of working on child exploitation is really serious and that Apple could do more to help iMessage users report it.
Stamos: The sexual exploitation of children is almost certainly the worst thing that happens online. We talk a lot about Russian trolls and the anti-vaxxers and hate speech and election disinformation, and all those things are important, but the original sin of modern communication technologies is child exploitation.
There are actually two big categories of the exploitation of children, and Apple is targeting both, which is why they have two solutions here.
One is the trading of existing child sexual abuse material. That trading generally happens between adults who are consensually part of a criminal conspiracy. That makes it a real challenge to find.
The second category of child abuse is the abuse that has a live child involved at that moment. And that’s either because they are being abused on a platform by an adult who creates the connection or because the platform is facilitating the connection between a child and an adult.
There are a variety of different abuse types in the taxonomy, but a really important one is called sextortion, where an adult reaches out to a child and tricks them into sending a naked image, either by pretending to be another kid or teenager and building a relationship or by just straight up extorting them by saying, “I already have naked images of you.” They do it to get more and more control of the kid and to get more and more content. That is one of the worst things I have run into in my 20 years of working on security and safety. That is one of the things it seems Apple was thinking about with the iMessage component. What I’d like to see them do is add the ability for people to report abuse on iMessage.
So if you get a death threat on iMessage, if a woman gets an unwanted picture of male genitalia, if a child gets a request for a naked picture, there is no way to tell Apple. All you can do is block the phone number. iMessage needs a reporting function in the same way that WhatsApp, Google Chat, and Twitter DMs have one, the way effectively every other communication platform, end-to-end encrypted or not, has one.
Angwin: Privacy advocates are calling it a backdoor. Others have said it’s hypocritical because, in the San Bernardino case, Apple refused to comply with a court order to bypass the phone’s four-digit passcode. The company said at the time that such an order would be like creating a master key to open all iPhones. Is this a backdoor?
Stamos: I would not call this a backdoor, but I do believe that the way Apple has rolled out device-side scanning has created the possibility of a new type of surveillance becoming popular globally. Most of my concerns are actually outside the United States. If you look at the existing child safety framework in the U.S., the jurisprudence has actually been going against it.
But elsewhere in the world, there are already bills requiring preemptive scanning for illegal content; this could become part of the EU Digital Services Act, the U.K. Online Harms bill, and a variety of bills in India, for example.
So while I wouldn’t call this itself a backdoor, my biggest concern is that Apple has effectively opened the door to a type of searching on devices.
Angwin: Could you flesh out what it would look like if, for instance, India were to start using this capability?
Stamos: In India, the Hindu nationalist government of Narendra Modi, the head of the BJP and the prime minister, is currently in a big fight with Silicon Valley, trying to suppress the speech of his political enemies and to push rules that are seen as oppressive of the Muslim minority.
India has incredibly broad laws that make speech illegal, such as blasphemy laws that we don’t have in the United States. The government has already been drafting bills that would require the filtering of speech that is considered illegal in India.
One of my concerns is that those bills will now require that phones sold in India be able to filter out content the government deems illegal, with the government supplying fingerprints in the same way NCMEC provides child safety fingerprints to Apple.
Angwin: One of your criticisms has been that Apple didn’t work with other groups to develop this program, but just announced it.
Stamos: I think there are four very Apple-y things going on here.
The first is that Apple doesn’t like to work with anybody else. So instead of consulting the EFF, the ACLU, and the various child safety groups that the tech companies work with, they kind of went their own way.
The second is that they have traditionally kind of pretended that they don’t run a user-generated content platform, but the truth is they don’t just make beautiful thousand-dollar slabs of glass anymore. They effectively run one of the largest social networks on the planet with over a billion users, and they kind of still have this belief that they are not responsible for what happens there. They’ve never wanted to run a big trust and safety function in the same way a Google, Microsoft, or Facebook or Twitter has.
The third is that they love using cryptography to solve difficult problems, and they love technological problems.
And then the fourth is that they all of a sudden care about working on child abuse. They never really have before, and now, for whatever reason, probably a lot of external pressure but maybe some internal decisions too, they care about it.
So, if you take those four things together, then this is kind of what you get.
Angwin: So what do you think Apple should do?
Stamos: I think they need to pull back and say, “We’re reopening this discussion.”
The one bit of lemonade from lemons here is that we now have an opportunity for people to understand each other’s equities and to try to come to a safe, reasonable compromise.
What I really would like is, if you’re going to do any kind of [device] scanning, it has to be on behalf of the user. It should not be ever seen as being against the user’s interests or for the benefit of law enforcement. That should be the guiding principle for any future work here.
If they believe that sharing of photos on iCloud is a real risk for people to share child sexual abuse material—and I think that is probably an accurate belief—then they could decide not to make shared photo albums end-to-end encrypted, and they could scan them on the server side just like everybody else does. I think that is a reasonable decision to make, to say this one thing is not going to be end-to-end encrypted because of this level of risk.
My real fear is that there’s a lot of opportunity to use machine learning to keep people safe and that Apple has completely poisoned the well on this, where now you will never get the privacy advocates to accept anything and you will have a massive amount of paranoia from the general public. That is a sad thing here that we need to fix quickly.
As always, thanks for reading.
Best,
Julia Angwin
Editor-in-Chief
The Markup