One of the creepiest innovations of our digital age is the ability to collect thousands of data points about people’s online behavior, make inferences about what they want to hear, and then send them targeted messages.
Advertisers were the earliest adopters of this technology, developing those ads that follow you around on the internet hawking shoes that you already bought.
But now state actors have adopted the technique to promote propaganda. During the 2016 U.S. presidential election, Russia famously used Facebook microtargeting in an attempt to influence voters. And during the recent Russian invasion of Ukraine, Facebook and Twitter have identified and removed multiple accounts that were spreading pro-Russian disinformation.
The threat is now large enough that Gen. Paul Nakasone, director of the U.S. National Security Agency, declared in September that foreign influence operations were “the next great disruptor” for the intelligence community.
To understand the role that microtargeting plays in information warfare, I spoke to Jessica Dawson, a research scientist and Army major who leads an information warfare team at the U.S. Army Cyber Institute.
Last year she published a paper, “Microtargeting as Information Warfare,” arguing that microtargeting has created new national security concerns that require the government and military to do more to protect servicemembers and the American people from foreign manipulation. “What started as a way for businesses to connect directly with potential customers has transformed into a disinformation machine at a scale that autocratic governments of the past could only imagine,” she wrote.
Dawson has also co-written a forthcoming paper analyzing how Russia used Twitter to enable its influence operations. The paper concludes that Russia uses “military and patriotic narratives and profiles to wrap anti-government sentiment in patriotic trappings and to set the stage for Americans to engage in armed domestic conflict.”
Our conversation, edited for brevity and clarity, appears below. Dawson emphasized that the opinions expressed are her own and do not represent the Department of Defense or the Department of the Army.
Angwin: Can you tell us what information warfare is?
Dawson: It is the control and manipulation of information, including advertising and propaganda, in order to influence populations to come around to your perspective. What our team is focused on is understanding how this works in the new social media environment and understanding how to defend ourselves in this space. We don’t want to learn how to do it better; we want to learn how to defend the American soldier against information manipulation.
Angwin: I’m guessing that the rise of social media has drastically increased the scale of information operations?
Dawson: Yes, very much. We’ve never before had the ability to reach out and touch every individual with a personalized message at scale. There have always been broad marketing trends aimed at different groups and populations, but now, with all of the data collection that’s going on in the surveillance economy, the ability to microtarget either individuals or small segments of a population is unprecedented.
I think it’s a safe assumption that every one of us, when we log on to different social media platforms, is receiving a very different version of that platform. One of the things that was really interesting to me about Chris Wylie’s book, “Mindf*ck,” was this idea of who’s persuadable and how we measure this. What I’ve come to conclude is that we’re all persuadable on something. We’re all going to be vulnerable to messaging that’s targeted just the right way. While there are times when people are more vulnerable, like with significant life events such as births or deaths or divorces, at the end of the day, we’re all susceptible to the right message.
Angwin: How are targeted advertisements related to propaganda and misinformation?
Dawson: Advertising is the original form of propaganda in many ways. We’ve all clicked on that shoe or that face cream and seen it follow us around the web. Well, propaganda and misinformation can do the exact same thing. It’s just as creepy in a lot of ways, but it may be even harder to detect because we don’t actually know what triggered what we’re seeing.
All of this data is being used to develop inferences about us, and we have no way to know what those inferences are, and whether they’re accurate. These inferences can actually shape our lives in substantial ways, such as access to credit or housing, for example. Companies are gathering all of this information from our everyday activities online and developing models that lead to inferences, and there is virtually no regulation over either the data they collect or what they do with it. There’s no transparency about why you are being shown something. We need to know more about what is going on in the information environment to protect ourselves.
Angwin: There has been a move toward banning behavioral targeting. Europe is considering it, although it is not clear whether the European Parliament will follow through with its latest legislation. There’s also a bill in Congress that would ban surveillance advertising. Do you think we should ban behavioral advertising?
Dawson: Yes, of course. (This is my position, which is not representative of the Department of Defense’s policy.) Shoshana Zuboff uses the term instrumentarian power to describe how information collected about you online does not actually work in your best interest. Companies might say they’re using it to make things more convenient for you, but at the end of the day, these companies don’t have a vested interest in working toward anyone’s benefit other than their own. There needs to be transparency, and there need to be enforcement mechanisms to define ethical boundaries around this type of online targeting.
Angwin: You said that you’re working on defense, so what can we do to defend ourselves, since there is no law preventing microtargeting?
Dawson: It’s actually really hard to stop data from being collected on you and to stop the surveillance and targeting of advertisements and information. I’ve been using an app that gives you a chronological feed on Twitter as opposed to a targeted one, and it is a radically different experience. I’m seeing folks that I haven’t seen in a while because the algorithm just hadn’t pushed their content into my feed, so it’s kind of nice. I am also a proponent of keeping your kids off of social media for as long as humanly possible. My children joke that they have “Amish internet” because they don’t have Safari on their phones. Having tried and failed to limit harmful content in other ways, I’ve found prohibition while they’re young to be the only way to limit the harms. But I can’t stop data from being collected on my kids when their schools are using ed tech companies that are amassing significant troves of data on our children with, again, very little to no oversight.
It’s also important to just be conscious of anything that you’re seeing, especially if it triggers a strong emotional response. You should ask, why am I seeing this? What’s the rest of the story? If I start to notice that everything is coalescing around a certain idea, I think, “There has to be somewhere out there where I can find other information.” In the Army we have a saying: First reports are always wrong. The outrage temptation is real, and the algorithms are designed to trigger that.
It’s really hard, and that’s why protection at scale will require more than any individual effort. We really need a collective effort with policy to help protect society in this space.
Angwin: What do you think about the information war happening between Ukraine and Russia?
Dawson: I want to reiterate that these are my own observations, but it seems like in the information fight, Ukraine really appears to be dominating. When we look at the very early stages of the invasion, Ukraine had some things go viral very early on that were quite powerful. I was shocked at how quickly they seemed to get almost the entire world to coalesce on their side. I’ve never seen anything like that in my entire life. For example, when [Ukrainian president Volodymyr] Zelenskyy opted not to leave, he said, “I need ammunition, not a ride.” That was a powerful narrative that he was communicating to both global leaders and to his own people that he was with them, that he shared the risk and he was willing to fight to defend his country.
Winning the information war matters: it means gaining support on the international stage and gaining actual ground support in the form of ammunition, weapons, and all of that. Ukraine appears to be doing a really good job of leveraging the information fight in this way.
However, one thing to keep in mind is that once the shooting starts, the information fight seems to become less important to those on the ground. For the folks that are suffering in cities that have been leveled, the information fight is almost irrelevant—other people can argue about whether what they see is real online, but you know if you’re being bombed.
Angwin: Many big tech companies have taken action against Russia: Facebook blocked Russian state media from advertising or monetizing on its platform, YouTube suspended advertising from Russian state media, and Snapchat halted ad sales in Russia. Do you think this is an effective way to fight disinformation?
Dawson: I think labeling state media, especially state media that’s not up to journalistic standards, is an important piece of the information warfare puzzle; however, blocking content and pulling out of the Russian ecosystem before Russia brought the hammer down only enables Russia to further control its own population. The Soviet state controlled information very tightly, and the current Russian state has been doing the same thing, so anything that we can do to help get information into those ecosystems is important in terms of winning the information war.
Angwin: Why do you see data collection and targeted advertising as a threat to our society and security?
Dawson: What we’re seeing in the information space is that we’re all sliced and diced into these atomized individuals, which means we’re not necessarily cohering as a cohesive group. We’re just a bunch of individuals doing our own thing. It’s a long-running debate in sociology about whether society is just the sum of its parts or if there is something that draws us all together into a collective whole. I tend to think that all of us running around as atomized individuals becomes a problem when we think about collective action. Some of the most important tasks require us to come together to do hard things collectively. One example is democracy, which requires that we see each other as part of a cohesive whole—not that we have to all agree, but that we see each other as members of a community with shared interests. Reducing us all to individuals seems to be making us more selfish and self-interested.
My big concern is that we are losing the fight to define who we are and what we all agree on. In other words, what our first principles are. For example, I might not agree with your position on something, but can I still respect you as a fellow citizen?
Microtargeting is keeping us in such a rage that we can’t calm down and reasonably decide to compromise on things. It’s eroding our ability to come together and engage in this thing called “we the people.” I don’t see how we can have this continued surveillance and it not get weaponized against us to a greater extent because it’s not designed currently to be good for society.
As always, thanks for reading.