Hello, friends,
In the wake of the Supreme Court’s jaw-dropping ruling overturning constitutional protections for abortion in the United States, there’s been a lot of discussion about how to keep data about pregnant people private.
Google announced, for instance, that it would remove sensitive locations, such as abortion clinics, from the location data it stores about users of its Android phones. Many people—including me in this newsletter—worried about whether they or their loved ones should delete their period-tracking apps.
But as Vox reporter Sara Morrison wisely observed, “[D]eleting a period tracker app is like taking a teaspoon of water out of the ocean.” So much data is collected about people these days that removing a small amount of data from an app or a phone is not going to erase all traces of a newly criminalized activity.
The Electronic Frontier Foundation notes that pregnant people are far more likely to be turned over to law enforcement by hospital staff, a partner, or a family member than by data in an app—and that the types of digital evidence used to indict people are often text messages, emails, and web search queries.
So how do you protect yourself in a world of relentless surveillance? This seems like a good time to go back to the basics and understand what privacy is and why we seek it. Because it’s not just people fearing arrest who need it, but all of us.
And so this week, I turned to an expert on this topic, Neil Richards, Koch Distinguished Professor in Law at Washington University in St. Louis. Richards is the author of two seminal privacy books: “Why Privacy Matters” (Oxford University Press, 2022) and “Intellectual Privacy” (Oxford University Press, 2015). He also serves on the boards of the Future of Privacy Forum and the Electronic Privacy Information Center and is a member of the American Law Institute. He served as a law clerk to William H. Rehnquist, former chief justice of the Supreme Court.
Our conversation, edited for brevity and clarity, is below.
Angwin: Let’s start with a question you’ve given a lot of thought to. Why does privacy matter?
Richards: I wrote the book “Why Privacy Matters” because I kept having the same privacy conversation with people over and over again. They’d ask me, “Why should I care about privacy? Isn’t it dead?” And I kept giving them the same answer: “No, privacy is not dead, but it is up for grabs. It matters because privacy is about power. It’s not about creepiness, or control, or contextual integrity, or any of a million other things.” My book is essentially the long form of that conversation.
Privacy matters because privacy is about human information. We’ve known for a long time that information is power. We know now that human information confers power over human beings. If we care about our ability to be authentic, fulfilled, and flourishing humans, we need to care about the rules that apply to our information.
I talk about privacy as an instrumental good, or something that gets us other things we care about. Privacy allows us to be humans, citizens, and consumers in ways that we find authentic and fulfilling. In the digital age, privacy is the whole ball game because so much of our society is constituted by human information. Every time we seek directions, shop, read the news, or vote (really every time we participate in modern human life), we are relying on information. If we live in an information society, then our information matters. That’s why privacy matters.
Angwin: You also have important thoughts about what privacy is not. Can you share some of them?
Richards: Yes, it’s important to explain what privacy is not because there are so many seductive but dangerous misconceptions about privacy. First, people think privacy is about hiding dark secrets, and that those who have nothing to hide have nothing to fear. This “nothing to hide” fallacy is dangerous because it promotes a kind of privacy fatalism and a sense that people who demand privacy must be deviant or criminal or wrong. But there are certain facts about all of us (say, our sexual behavior or personal health information) that we don’t want disclosed. Put another way, everyone needs privacy at one time or another, and this need for privacy is legitimate. The fallacy also misunderstands why privacy matters: privacy is about power, not about hiding secrets.
Second, people think privacy is only about protecting people from creepy things that others are doing with their data. For one thing, creepiness is over-inclusive; plenty of things strike us as creepy when we first encounter them (whether eating raw fish or a news feed algorithm) but turn out to be helpful, even great. More important, creepiness is under-inclusive. There are many things we don’t notice that can be really bad for us. Think about information practices like secret social scoring or racially biased algorithms that we don’t get to interrogate. We never see these secret data practices or get to judge whether they are “creepy”; we only see their results (we didn’t get the mortgage, or we didn’t get into that college). So creepiness is a really bad guidepost because it misses so many of the worst data practices.
Third, creepiness is malleable. Our conception of what is and isn’t creepy rests on social norms that vary among people and across different times and places. Furthermore, this conception can be shaped by powerful entities. Facebook has been very effective at this: it is running a long con, normalizing information and surveillance practices that would have been completely unacceptable to people if they’d actually been asked to agree to them in the first place.
Angwin: Another misconception you discuss in your book is that people think privacy is the ability to control their data. Why is this misguided?
Richards: On the one hand, who doesn’t want to be in control of their data? This is how we’ve approached questions of privacy and information for literally 50 years. It’s a nice rhetorical sound bite, but the problem is that it doesn’t work. We’ve tried it for 50 years, and it’s failed spectacularly.
First, controls overwhelm you. The idea of tweaking your privacy settings might make sense if you use only one service, but most people use dozens of services, each with dozens of privacy options, so control doesn’t scale. Second, control as it’s usually deployed in our economy is an illusion. We get these nice shiny dashboards with all sorts of different buttons and sliders, but we rarely get the choices we might want, like “no targeted ads” or “only use my data to help me.” We only get the choices that don’t do violence to the business models of surveillance capitalists.
The third problem with control is that it sets a trap for us. We see this in the context of cookie consents, where a single website may present dozens of interface options letting us properly “curate” our preferences. In the end we just click “I Agree” because we want to read the article or order the bagels; we don’t want to conduct a personal privacy audit for every website we visit.
But here’s the trap: Afterward, we feel guilty. We have this subjective sense that well, they did give me a choice, so I guess it’s my fault that I don’t have privacy. Of course, all along, this has been a carefully constructed exercise in choice architecture to get us to click the big button that says “I Agree” rather than the small, hard-to-find button that leads us down the rabbit hole of privacy choices. While control sounds appealing in theory, in practice it’s ineffective. It leads to the imposition of some really regressive and disempowering data practices.
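To make that choice architecture concrete, here is a deliberately simplified, hypothetical sketch. The labels, step counts, and “friction” score are invented for illustration; this is not any real site’s code or any real consent platform’s API.

```typescript
// Hypothetical illustration only: not any real site's code or any
// real consent-management platform's API. It models the asymmetry
// described above: agreeing takes one prominent click, while
// curating your preferences takes many obscure ones.

interface ConsentPath {
  label: string;
  prominent: boolean;      // big colorful button vs. small gray link
  stepsToComplete: number; // clicks and toggles before you can read the article
}

const agree: ConsentPath = {
  label: "I Agree",
  prominent: true,
  stepsToComplete: 1,
};

const curate: ConsentPath = {
  label: "Manage preferences",
  prominent: false,
  stepsToComplete: 25, // invented figure: dozens of per-vendor toggles
};

// A crude friction score: more steps and less visual prominence mean
// fewer people ever finish that path.
const friction = (path: ConsentPath): number =>
  path.stepsToComplete * (path.prominent ? 1 : 3);

console.log(friction(agree));  // 1  -> the path most visitors take
console.log(friction(curate)); // 75 -> the rabbit hole most visitors skip
```

The point is not the arithmetic but the design: when one path costs a single click and the other costs dozens of steps, the “choice” has effectively been made for us.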
Angwin: In your book you also discuss the benefits of privacy—what it provides to all of us. Can you talk about these benefits?
Richards: The most important thing to understand about privacy, in addition to its being about power, is that privacy gets us other essential things that we should care about. In the book, I talk about three of them: identity, freedom, and consumer protection.
The most basic value that privacy gives us is the space to determine our identities. In other words, it helps us figure out who we are and what we believe in. This is why teenagers close their bedroom doors and why libraries have fought for the confidentiality of library records: when we’re watched, we act differently.
And we’ve long understood from books like “1984” that privacy is essential to democratic freedom. When the government is watching, we act differently. In addition, as the Cambridge Analytica scandal revealed, human information can be a powerful tool for electoral manipulation. Developments in information technology mean that privacy is going to be even more important to sustainable democratic self-governance in the future than it has been in the past.
Finally, privacy is essential to consumer protection, particularly because the information revolution makes us ripe for manipulation. During the Industrial Revolution, the law had to respond to a new economy with regulation to protect the consumer. This included rules against deceptive advertising and other consumer protections like workplace safety and anti-discrimination laws. We’re going to need a similar set of legal protections to be sure that we can trust the digital economy and take advantage of its often magical services without being betrayed, manipulated, sorted, discriminated against, or otherwise mistreated.
So in these ways, privacy matters because it lets us authentically develop our identities as humans, it safeguards our political freedom and autonomy as citizens, and it lets us participate in and trust the digital economy as consumers.
Angwin: What do you think about the draft bipartisan American Data Privacy and Protection Act released in June?
Richards: The good news is that this bill is the strongest privacy bill that’s ever been seriously considered in the United States. In some respects, it moves past the tired proceduralism of previous proposals. It doesn’t just rest on the fiction of notice and the illusion of control to justify essentially whatever information practices a company can bury in its privacy policy.
What’s missing from this bill (and from virtually all of the proposed bills) is a real, robust duty of loyalty. A duty of loyalty is the substantive requirement that a company that receives data from a trusting consumer has to use that data in the consumer’s best interests, not in ways that serve only the corporate bottom line. Duties of loyalty have only just started to be seriously considered in privacy law, but they have worked across our law for centuries: doctors and lawyers, for example, owe duties to their patients and clients.
The beauty of a duty of data loyalty is that it enables us to share data without being constantly pestered for consent, safe in the knowledge that our data will be used only to make our experiences with the service, and ultimately our lives, better. It removes the nagging fear that our information is out there beyond our control and possibly being used to harm us. A duty of loyalty would close that gap, to everyone’s benefit. It’s yet another illustration of why privacy—and effective privacy rules—matter.
As always, thanks for reading.
Best,
Julia Angwin
The Markup
(Additional Hello World research by Eve Zelickson.)