Artificial Intelligence

Trump Wants Even Looser AI Guardrails. Why California, Despite Passing Over 20 AI Bills This Year, Might Not Push Back

President-elect Trump has vowed to rescind an executive order that imposed AI safeguards, and could use tech to enable mass deportations. How far will California go in the other direction?

A U.S. Customs and Border Protection surveillance tower monitors activity along the U.S.–Mexico border fence in Calexico, California John Moore/Getty Images

The Markup, now a part of CalMatters, uses investigative reporting, data analysis, and software engineering to challenge technology to serve the public good. Sign up for Klaxon, a newsletter that delivers our stories and tools directly to your inbox.

California Gov. Gavin Newsom is preparing to wage a legal war against President-elect Donald Trump, convening a special legislative session next month to try to “Trump-proof” the state. But it appears that artificial intelligence safeguards won’t initially be in the fight, even though California’s legislature had placed a major focus on AI regulations this year.

Trump has promised to immediately rescind President Joe Biden’s executive order that had imposed voluntary AI guardrails on tech companies and federal agencies. The president-elect’s administration could also, immigrant advocates say, use AI tools to assist the mass deportation he has pledged to implement. 

Earlier this year, California legislators passed more than a dozen bills regulating artificial intelligence, curbing the use of algorithms on children, limiting the use of deepfakes, and more. But Gov. Newsom vetoed the most ambitious — and contentious — bill, which would have required testing of AI models to determine whether they would likely lead to mass death, endanger public infrastructure, or enable severe cyberattacks. 

There are signs, though, that AI could — in the not-so-distant future — go from abstract concern to prominent political cudgel between the Trump administration and California’s Democratic leaders. It could be another high-profile way to challenge Trump and his newfound tech allies, some of whom have gleefully proclaimed a new, deregulated era for artificial intelligence products.

“I think Newsom and the California Legislature have an opportunity to step into the gap that the federal government is leaving — to create a model environment for safe and rights-respecting technology and deployment,” said Janet Haven, executive director of the Data & Society Research Institute, a nonprofit that studies the social implications of AI and other technologies. “On the other hand, there’s no way to get around the fact that Big Tech is right there, and will be a huge factor in whatever the California Legislature and Newsom want to advance in terms of AI legislation.”


Why California lawmakers and others worry about AI

AI safety advocates told CalMatters they’re not necessarily sweating the apocalyptic AI nightmares imagined by some doomsayers. Instead, they are focused on how AI tools are increasingly used in healthcare, housing, the labor force, law enforcement, immigration, the military, as well as other industries and fields prone to discrimination, surveillance, and civil rights violations — because there’s evidence that such tools can be unwieldy, inaccurate, and invasive. “We have documentation that shows how these AI systems are likely to do all sorts of things—they’re pattern-making systems, they’re not really decision-makers, but the private sector and the public sector are using them as a substitute for decision-makers,” said Samantha Gordon, chief program officer at TechEquity. “That’s not wise.”

Santa Ana Democratic Sen. Tom Umberg told CalMatters that 2024 “was a bit of a testing year” for AI bills. California lawmakers outlawed sexually explicit deepfakes and certain election-related deepfake content, required tech companies to provide free AI detection tools, and stipulated that tech companies must publicly release data about their AI training tools.

Gov. Newsom ultimately signed roughly 20 AI bills into law. But he also controversially vetoed a major bill by San Francisco Democratic Sen. Scott Wiener that would’ve instituted significant testing requirements on AI tools to make sure they avoid catastrophic outcomes. In his veto message, Newsom wrote that the bill risked curtailing innovation, but he added that he wanted to “find the appropriate path forward, including legislation and regulation.” 

Wiener told CalMatters he’s working on updated legislation that could garner “broader support.” Such a bill would presumably include additional buy-in from the tech sector, which the state is relying on for tax revenues, and which has a notable lobbying presence in Sacramento — Google just racked up the largest quarterly lobbying tab in a decade.

Asked whether to expect more Big Tech lobbying against regulatory efforts in California, Palo Alto Democratic Assemblymember Marc Berman said: “It’s going to be a good time to be a lobbyist. They’re going to do very well.”

Though Wiener’s AI testing bill was batted down, as were a few other noteworthy AI bills that didn’t make it out of the Legislature, California is “far and away the center of AI regulation in the U.S.,” said Ashok Ayyar, a Stanford research fellow who co-wrote a comparative analysis of Wiener’s bill against the European Union’s more comprehensive AI efforts.


A lack of federal regulation and legislation

California is leading on AI in large part because the competition is basically non-existent.

Congress hasn’t passed meaningful AI legislation. Asked about Trump and the incoming Republican majority, San Ramon Democratic Assemblymember Rebecca Bauer-Kahan said, “There isn’t much regulation to deregulate, to be honest.”

Sans federal legislation, President Biden issued an executive order in October 2023 intended to place guardrails around the use of AI. The order built on five policy principles on the “design, use, and deployment of automated systems to protect the American public.” Biden directed federal agencies “to develop plans for how they would advance innovation in the government use of AI, but also protect against known harms and rights violations,” said Haven. Soon after Biden’s executive order, his administration created the U.S. AI Safety Institute, which is housed within the Commerce Department. 

Biden’s executive order relies on tech companies, many of which are based in California, to voluntarily embrace the administration’s suggestions; it also relies on agencies like the Department of Homeland Security, which includes Immigration and Customs Enforcement and Customs and Border Protection, to be transparent and honest about how they’re using AI technology and not violate people’s civil rights. 

Like most executive orders, Biden’s AI edict is loosely enforceable and fairly easy to reverse.

Trump has already promised to repeal Biden’s executive order on day one of his term; the 2024 Republican platform argues that the executive order “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” Homeland Security and other executive branch agencies may be granted far more flexibility when Trump takes office, though advocates say the bar was already low; a June 2024 report from the nonprofit Mijente titled “Automating Deportation” argues the department hasn’t followed through on the Biden administration’s already relatively meager requests.

After Trump clinched the 2024 presidential election, segments of the tech industry were jubilant about what they foresee for the AI industry—including an imminent uptick in government contracts. “Stick a fork in it, it’s over,” Marc Andreessen, the billionaire general partner of venture capital firm Andreessen Horowitz, wrote on X. “The US will be the preeminent AI superpower in the world after all.”


Fully unleashed federal agencies

If the mass deportation of undocumented immigrants that Trump has promised comes to pass, it would require a wide variety of technologies, including AI tools. Homeland Security already employs an AI system called the Repository for Analytics in a Virtualized Environment, or RAVEn, developed under a nine-figure government contract. The department also has access to an extensive biometric database, and monitors certain undocumented immigrants outside of detention centers via a surveillance tool that uses AI algorithms to try to determine whether an immigrant is likely to abscond. 

“We know from Trump’s first administration that there are going to be fewer guardrails with the use of this tech, and agents will feel even more emboldened,” said Sejal Zota, co-founder and legal director of Just Futures Law, a legal advocacy group focused on immigration, criminal justice and surveillance issues. “That’s one area where we’re going to see increased AI use to support this mass deportation agenda.”

To the best of Zota’s knowledge, there’s little California lawmakers or courts could do to prevent federal agencies from using AI tech against vulnerable populations, including undocumented immigrants. “Is it an issue? Absolutely, it’s an issue,” said Sen. Umberg. “What can we do about it? What can we do about federal agencies using artificial intelligence? We can’t do much.”

Estimates show there are at least 1.8 million undocumented immigrants in California.

The Dreamforce conference hosted by Salesforce in San Francisco on Sept. 18, 2024. Dreamforce is an annual tech conference attracting thousands of participants and is the largest AI event in the world, according to Salesforce. Photo by Florence Middleton for CalMatters

What if Congress acts?

Another potential threat to California’s AI regulations is if the majority Republican Congress passes looser AI rules of its own, preempting state law. California lawmakers, including Assemblymember Bauer-Kahan and Sen. Umberg, said they don’t think significant AI legislation will make it to President Trump for his signature. 

Congressional gridlock is one reason Sen. Wiener said he’s pursuing AI regulation in the California Legislature in the first place: “I was very clear that if (the issue) were being handled statutorily at the federal level, I’d be happy to close up shop and go home,” he said. “But it wasn’t happening, and it’s certainly not going to happen under Trump.”

Not everyone believes Congress will remain stagnant on this issue, however, particularly with one party now dominant in Washington. “I wouldn’t underestimate the creativity of this incoming administration,” said Paromita Shah, executive director of Just Futures Law.

Added Haven: “I think it’s possible that with a Republican trifecta, we’ll see an attempt to pass a very weak data privacy law at the federal level that preempts state law. Then it’s a game of whack-a-mole between the state legislature and the federal legislature.”


California’s next steps

Newsom has to date signed many AI bills but turned back others he says go too far and risk inhibiting an industry he has sought to cultivate as a government partner. A spokesperson for Newsom did not directly respond to CalMatters’ questions for this story, instead providing a statement highlighting the state’s role in shaping the future of so-called “generative AI,” a recent and innovative form of the technology behind tools like ChatGPT, DALL-E, and Midjourney: “California has led the nation in protecting against the harms of GenAI while leveraging its potential benefits,” said spokesperson Alex Stack. 

President-elect Trump’s team did not respond to written questions from CalMatters.

Dan Schnur, a political analyst and professor at UC Berkeley and other campuses, predicted the governor will save his political capital for other clashes. “Newsom’s incentive for strengthening his relationship with Silicon Valley is probably stronger than his need for yet one more issue to fight over with Donald Trump,” Schnur said.

Florence G’Sell, a visiting professor at Stanford’s cyber policy center, cautioned Newsom against clinging to the deregulatory side of Silicon Valley. “There is really a very strong movement that wants to highlight the risks of AI, the safety questions,” G’Sell said. “If I were the governor, I wouldn’t be insensitive to this movement and the warnings.” 

Lawmakers are eyeing other avenues to strengthen Californians’ legal recourse against AI technology. Assemblymember Bauer-Kahan previously told CalMatters she plans to reintroduce a stronger version of a bill, which failed to advance past the Legislature last session, to crack down on discriminatory AI practices. Another top AI priority, according to Menlo Park Democratic Sen. Josh Becker, is less sexy, but perhaps just as important: “closely monitor the implementation of this year’s regulatory framework (that we just passed),” he wrote. 

California’s next AI regulatory steps were always going to be intensely analyzed. That’s even more so the case now, with Trump returning to office—a challenge state lawmakers are embracing.

“One of the things that is somewhat amusing to me is when folks come to me and say, ‘Whatever you do in California is going to set the standard for the country,’” Sen. Umberg said. “As a policymaker, that’s catnip. That’s why I ran for office.”
