Hello World

Apocalypse Not

Why do tech people love talking about the end of the world?

[Image: 15th-century Gothic painting of naked European figures being terrorized by demons. Credit: PHAS/Universal Images Group via Getty Images]

Hi everyone, Michael Reilly here. 

It’s still early in 2024, but is anyone else exhausted by the levels of drama and hyperventilation surrounding AI? One thing that’s been particularly striking for me is some people’s willingness to invoke catastrophe—actual apocalyptic visions of our future—to describe their spiffy chatbots and image generators.

I’m thinking in particular of the recent tiff on Twitter between Vinod Khosla and Marc Andreessen, two of Silicon Valley’s most prominent venture capitalists and both investors in OpenAI. What started as a discussion over whether it was a good idea for OpenAI’s proprietary code to be open sourced somehow found its way to comparing the company’s products to the Manhattan Project.

Khosla’s point appeared to center on the national security risk of open sourcing very sophisticated AI code to anyone who might decide to misuse it. But this kind of apocalypse-speak is in the founding DNA of OpenAI, a company that began as a purposeful effort to bring about a computer-based superintelligence that would eclipse the human intellect. In late 2022, OpenAI dropped ChatGPT and the fanfare really took off. So did the catastrophizing, spread by people like Sam Altman, who as the company’s chief has a vested interest in making his tech sound Really Powerful.

Why do tech people like talking about things in apocalyptic terms? Let’s be fair: *everyone* loves talking about the end times. Ancient civilizations, religious texts, Hollywood movie producers, mainstream media, internet conspiracy theorists—tech executives aren’t alone. There’s something innately human about believing the world is on the cusp of cataclysm.

But for tech people specifically, I wonder: Does Sam Altman say all he needs is $7 trillion to transform the world into an AI-powered utopia because he honestly believes that’s a good use of the money (has he heard of climate change)? Or is it just marketing? The more dangerous their tech sounds, the more attention it gets, and therefore the more money, both in investment and in company valuation. That strategy… seems to be working really well! OpenAI has been valued at $80 billion.

The real problem, however, is that this shiny-object narrative distracts from the actual, real-life ramifications of AI. Software that uses big tranches of data to make predictions about the world is already out there, busily doing its thing. In some cases—like Los Angeles’s scoring system for subsidized housing, Wisconsin’s Dropout Early Warning System, plagiarism detection tools, and a litany of other well-documented cases ranging from hiring to criminal justice—it’s hurting people.

If they bothered to pay attention, I think the Great Tech Power Brokers would mostly agree with the idea that we should fix systems like the ones The Markup investigated in LA or Wisconsin so that they operate more fairly and equitably. But by ignoring these admittedly far less sexy subjects and focusing instead on arguing about how AI is going to bring about the end times—or that only through the wise stewardship of the high priests of venture capital can we be saved from destruction—we are asked to train our attention on fantasy. 

And hey, who doesn’t love a good fantasy? Just make sure to save room for reality.

Thanks for reading,

Michael Reilly

Managing Editor

The Markup

PS – if you’ve stuck with me this far or just skipped to the bottom, you deserve some cool stuff. Here are a few (mostly AI-ish) things from around the internet I found really interesting lately. Hope they give you some things to think about this weekend:

We don't only investigate technology. We instigate change.

Your donations power our award-winning reporting and our tools. Together we can do more. Give now.
