Nabiha Syed here. Last week, I returned from a business trip to a pile of mail containing no fewer than four separate breach notification letters. You know what they look like: staid, no-nonsense letters informing you that your private information has been obtained by malcontents, and in exchange for the inevitable headaches you will now face, you’ve been given the generous gift of free credit monitoring. A Cybersecurity Awareness Month miracle!
I was promised a future with flying cars, and instead I have free identity protection services from Kroll. (No offense, Kroll.)
Unfortunately, my experience isn’t unique. According to Verizon’s 2023 Data Breach Investigations Report, ransomware, which represents nearly a quarter of all breaches, is “ubiquitous among organizations of all sizes and in all industries,” and cryptocurrency-involved breaches increased fourfold over the previous year. The latest IBM Cost of a Data Breach Report also finds that the polled breached organizations were more likely to pass incident costs on to consumers (57 percent) than to increase security investments (51 percent)—which is, um, not confidence-inspiring. (If you’d like to be horrified, here’s a running list of data breaches so far this year.)
I prefer curiosity to despair, so I picked up a fascinating book to learn more about how we got here, called “Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks.” Once I read it, I reached out to the author, Scott Shapiro, a professor at Yale Law School and the founding director of the Yale Cybersecurity Lab. Read on to learn about the first time Americans heard the word “internet,” what you can do to keep yourself safe, the distinction between “upcode” and “downcode,” and, disturbingly, something called a “god bot.”
Syed: We’re 13 years out from the day President Obama declared cybersecurity to be a top priority and established a White House office for these issues. And here we are, where the volume of global cyberattacks is up dramatically this year compared to last year. What’s happening?
Shapiro: We live in an information society where wealth, prestige, and status depend on the transfer, manipulation, and storage of information. We also live in a world where even knowledge workers don’t understand anything about how information is processed.
That’s why I wrote the book. I have a technical background, I’ve published in computer science, and I thought to myself, if someone like me can’t understand cybersecurity, something is wrong. I wanted to write something that explained: How does this work? What’s my risk? Is cyberwar going to happen? The existing books are either eat-your-vegetables compliance books, total doomsday, or for advanced hackers. That began a seven-year journey of learning and researching interesting stories about cybercrime. While people are not always interested in cybersecurity, they are fascinated by hacking and cybercrime—like true crime, but the truth is that crime is now cybercrime. And that’s a way to introduce the issue broadly.
Syed: When people hear “cybersecurity” or “hacking,” they think of “WarGames” or Angelina Jolie in “Hackers”—the focus is on code. But you challenge that in your book, with this distinction between “upcode” and “downcode” problems.
Shapiro: When you watch movies, you see zeros and ones across the screen, you see people typing really quickly. So the natural assumption is that these are technical activities with technical solutions. That’s a mistake. It’s actually a human problem that needs a human solution.
Imagine you’re sitting at your keyboard. “Downcode” is all the code below your fingertips, meaning your operating system, application, network protocols. But “upcode” is everything above your fingertips, meaning your personal habits, your beliefs, social norms, laws, professional ethics—everything that guides behavior.
And so the argument in my book is that people think they can make our lives more secure by fixing the downcode, but actually, that’s too late. Because downcode is the product of human beings following incentives set out by norms. If we want to be more secure, let’s figure out the vulnerabilities in upcode. Where are our politics failing us? Where are the bad incentives that the law gives or culture allows for developers or users?
This is good news: There are so many kinds of upcode, which means there are so many ways to intervene and make our systems more secure.
Syed: Let’s stick with that “upcode” lens, especially when it comes to industry norms and behavior. What are companies doing about rising cybersecurity threats? And what are they not doing that they should be?
Shapiro: Companies do only what they are forced to do. Cyberspace is a realm of vast legal impunity, because the law does not impose costs on many actors. There is little legal liability, so you have moments where norms do the work—like when 9/11 happened, a heightened sense of security made companies step up. I will note that President Biden in his national cybersecurity strategy called for legal liability for software vulnerabilities, which is a major deal. That creates incentives to build securely, not engage in endless patching.
Otherwise the regulatory landscape for cybersecurity is voluntary and private. Regulators have their hands tied to a large extent. Sure, the Federal Trade Commission has jurisdiction over misleading consumers, and so if something is insecure, they could go after someone for that. But the problem of insecurity is not misbranding; it is that your valuable information is not secure. Regulators need to make that happen. Unfortunately, Congress no longer seems to exist. [Note: This interview took place before Oct. 3, 2023!]
Syed: Data journalists have gained many a wrinkle worrying about the Computer Fraud and Abuse Act (CFAA) and what constitutes unauthorized access when collecting data. (We were so concerned about it that we filed an amicus brief with the Supreme Court in Van Buren v. United States.) Tell us about one of the people mentioned in the book, Robert Morris, who was the first person convicted under the CFAA.
Shapiro: So on Nov. 2, 1988, a first-year graduate student named Robert Tappan Morris Jr. released a self-reproducing program—what we now call a computer worm—onto the nascent internet. He went to dinner, came back, and the whole internet crashed. He thought he was conducting an experiment on how many machines he could access, but the worm reproduced wildly, overloading computers and crashing them.
This was actually the first time that Americans ever heard the word “internet”—because of the Morris worm. The media needed a way to identify what this new thing was!
It’s a sad story, in the sense that Morris was charged, convicted, and had to tell his father what he had done—and his dad was the head of cybersecurity for the National Security Agency. Can you imagine? You call up your father and say, like, “I know it’s your one job to keep the country safe, and I ruined it.” At least he didn’t serve time in jail. He was given community service.
Syed: As a parent, I really feel that one. Oof. Let’s go from history to the future. We live in the era of the rise of generative AI, and there is concern, and now evidence, that people are using generative AI to tailor their phishing campaigns. How do you think about how generative AI changes the arms race around cybersecurity?
Shapiro: The arms race in one sentence is that you have AI as attacker, AI as defender, and then you have AI as the target of exploitation.
One thing that’s out there—and this is AI as attacker—are ways that AI can make it easy to manipulate and trick others. For example, these things called “god bots.” They’re these chatbots that claim to tell you what God will say or Jesus will say. And there are many of them now, and they’re going to be an unbelievably powerful tool of manipulation and exploitation. God wants you to pay this person in crypto, yada, yada, yada. That’s very scary.
And then there’s AI as defender—we’ve been using AI for over a decade to help defend networks against intrusions.
There are also prompt injections, poisoned training data, and adversarial machine learning—this is where AI is the target of exploitation. There are a bunch of ways attackers try to affect how the AI itself works.
Syed: God bots, great. I needed more nightmare fodder. On a lighter note, let’s think about proactive measures. If you’re a regular person just living in the world and subject to all these systems, what should you do to protect yourself?
Shapiro: On one hand, you can’t solve the problem yourself. You can’t avoid Equifax if you have credit, so when Equifax was hacked, everyone’s data went all over the place. Without very strong government action, there isn’t much we can do in those cases.
It’s getting worse. We live in a training capitalist society, meaning information is important because it can train neural nets. That’s the way money will be made. Companies will collect data, and since breaches are all over the place, there are no longer real reputational costs associated with a breach. The only thing that can change is if you impose liability, not just two years of free credit monitoring services.
That doesn’t mean that there’s nothing you can do. I would say the two easiest things you could possibly do are to enable two-factor authentication and to not click on any link or open an attachment from somebody you do not know. Write back to them and say, “I’m sorry, who are you?” Google them, do something. Because cybercrime is a high-volume, low-margin business—attackers do not want to bother with you if you make it a little harder for them. So that’s the good news. The bad news is, we live in a society where all our data is in the hands of people who are irresponsible.
Syed: Where do you think people should start their journey?
Shapiro: At Yale, I teach a class where students learn how to hack. My partner Sean O’Brien and I have put the entire course online for people. We go from soup to nuts—from you don’t know anything to, at the end of 12 sessions, you can do a lot.
We don’t do this because we want to create hackers. We want to create educated, secure people who can understand the adversary, know what they’re dealing with, and develop effective techniques.
Many folks, including lawyers, are already upcoders—and we’re good at it. But we need to understand the downcode too.
Have your own breach story that you’d like to share? Our friends at the Cyber Collective are hosting a community event called “Almost Got Got” that promises to be a powerful gathering. And if you’re curious about who is collecting your data while you’re browsing online, be sure to check out our newly updated Blacklight privacy inspector and add your search to the 11 million we’ve already helped.
Thanks for reading!

Nabiha Syed
Chief Executive Officer