
Artificial Intelligence

Why Silicon Valley Is Trying So Hard to Kill This AI Bill in California

The sprawling California legislation offers protection to whistleblowers and citizens. The coming weeks could decide its fate

Photo by Jeff Chiu, AP Photo

Though lawmakers and advocates proposed dozens of bills to regulate artificial intelligence in California this year, none have attracted more disdain from big tech companies, startup founders, and investors than the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

In letters to lawmakers, Meta said the legislation, Senate Bill 1047, will “deter AI innovation in California at a time where we should be promoting it,” while Google claimed the bill will make “California one of the world’s least favorable jurisdictions for AI development and deployment.” A letter signed by more than 130 startup founders and incubator Y Combinator goes even further, claiming that “vague language” could “kill California tech.”

Prominent AI researchers are also taking sides. Last week, Yoshua Bengio and former Google AI researcher Geoffrey Hinton, who are sometimes called the “godfathers of AI,” came out in support of the bill. Stanford professor and former Google Cloud chief AI scientist Fei-Fei Li, who is often called the “godmother of AI,” came out against SB 1047.

The bill, approved 32-1 by the state Senate in May, must survive the Assembly Appropriations suspense file on Thursday and win final approval by Aug. 31 to reach Gov. Gavin Newsom this year. 

The bill, introduced by San Francisco Democrat Scott Wiener in February, is sprawling. It would:

  • Require developers of the most costly and powerful AI tools to test whether they can enable attacks on public infrastructure, highly damaging cyber attacks, or mass casualty events; or can help create chemical, biological, radioactive, or nuclear weapons. 
  • Establish CalCompute, a public “cloud” of shared computers that could be used to help build and host AI tools, to offer an alternative to the small handful of big tech companies offering cloud computing services, to conduct research into what the bill calls “the safe and secure deployment of large-scale artificial intelligence models,” and to foster the equitable development of technology.
  • Protect whistleblowers at companies that are building advanced forms of AI and contractors to those companies.

The latter protections are among the reasons whistleblower and former OpenAI employee Daniel Kokotajlo supports SB 1047, he told CalMatters. He also likes that it takes steps toward more transparency and democratic governance around artificial intelligence, a technology he describes as “completely unregulated.”

Kokotajlo earlier this year quit his job as a governance researcher at OpenAI, the San Francisco-based company behind the popular ChatGPT tool. Shortly thereafter he went public with allegations that he witnessed a violation of internal safety protocols at the company. OpenAI was “recklessly racing” toward its stated goal of creating artificial intelligence that surpasses human intelligence, Kokotajlo told the New York Times. Kokotajlo also believes that advanced AI could contribute to the extinction of humanity — and that employees developing that technology are in the best position to guard against this.

In June, Kokotajlo joined more than a dozen current and former employees of OpenAI and Google in calling for enhanced protections for AI whistleblowers. Those workers were not the first to do so; Google employees spoke out in 2021 after co-leads of the Ethical AI team were fired. That same year, Ifeoma Ozoma, the author of a tech whistleblower handbook and a former Instagram employee, cosponsored California’s Silenced No More Act, a state law passed in 2022 to give workers the right to talk about discrimination and harassment even if they signed a non-disclosure agreement.

Kokotajlo said he believes that, had SB 1047 been in effect, it would have either prevented, or led an employee to promptly report, the safety violation he said he witnessed in 2022, involving an early deployment of an OpenAI model by Microsoft to a few thousand users in India without approval.

“I think that when push comes to shove, and a lot of money and power and reputation is on the line, things are moving very quickly with powerful new models,” he told CalMatters. “I don’t think the company should be trusted to follow their own procedures appropriately.” 

When asked about Kokotajlo’s comments and OpenAI’s treatment of whistleblowers, OpenAI spokesperson Liz Bourgeois said company policy protects employees’ rights to raise issues. 

Existing law primarily protects whistleblowers from retaliation in cases involving violation of state law, but SB 1047 would protect employees like Kokotajlo by giving them the right to report to the attorney general or labor commissioner any AI model that is capable of causing critical harm. The bill also prevents employers from blocking the disclosure of related information.

Whistleblower protections in SB 1047 were expanded following a recommendation by the Assembly Privacy and Consumer Protection committee in June. That recommendation came shortly after the letter from workers at Google and OpenAI, after OpenAI disbanded a safety and security committee, and after Vox reported that OpenAI forced people leaving the company to sign nondisparagement agreements or forfeit stock options worth up to millions of dollars. The protections address a concern from the letter that existing whistleblower protections are insufficient “because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

Employees must be able to report dangerous practices without fear of retaliation.

California Assemblymember Rebecca Bauer-Kahan, Democrat from San Ramon

OpenAI spokesperson Hannah Wong said the company removed nondisparagement terms affecting departing employees. Despite these changes, last month a group of former OpenAI employees urged the Securities and Exchange Commission to investigate nondisclosure agreements at the company as possible violations of an executive order signed by President Joe Biden last year to reduce risks posed by artificial intelligence. 

Bay Area Democrat Rebecca Bauer-Kahan, who leads the Assembly Privacy and Consumer Protection Committee, said she helped add the whistleblower protections to SB 1047 because industry insiders have reported feeling muzzled by punitive non-disclosure agreements, even as more of them speak out about problems with AI. 

“If Californians are going to feel comfortable engaging with these novel technologies, employees must be able to report dangerous practices without fear of retaliation,” she said in a written statement. “The protections the government provides should not be limited to the known risks of advanced AI, as these systems may be capable of causing harms that we cannot yet predict.”


Industry Says Bill Imperils Open Source, Startups

As vocal as they’ve been in opposing SB 1047, tech giants have said little about the bill’s whistleblower protections, including in lengthy letters that Meta, Microsoft, and Google sent to lawmakers. Google declined to comment about those provisions, while Meta declined to make California public policy lead Kevin McKinley available for comment. OpenAI pointed to a previous comment by Bourgeois that stated, “We believe rigorous debate about this technology is essential. OpenAI’s whistleblower policy protects employees’ rights to raise issues, including to any national, federal, state, or local government agency.”

Instead, opponents have highlighted the bill’s AI testing requirements and other safety provisions, saying compliance costs could kneecap startups and other small businesses. This would hurt the state economy, they add, since California is a center of the AI industry. The bill, however, limits its AI restrictions to systems that cost more than $100 million to train, or that require more than a certain quantity of computing power. Supporters say the vast majority of startups won’t be covered by the bill.

Opponents counter that small businesses would still suffer because SB 1047 would have a chilling effect on individuals and groups that release AI models and tools free to the public as open source software. Such software is widely used by startups, holding down costs and providing them a basis on which to build new tools. Meta has argued that developers of AI software will be less likely to release it as open source out of fear they will be held responsible for all the ways their code might be used by others.

If we over regulate, if we over indulge and chase a shiny object, we can put ourselves in a perilous position.

California Governor Gavin Newsom

Open source software has a long history in California and has played a central role in the development of AI. In 2018, Google released as open source its influential “BERT,” an AI model that laid the groundwork for large language models such as the one behind ChatGPT and that sparked an AI arms race between companies including Google, Microsoft, and Nvidia. Other open source software tools have also played important roles in the spread of AI, including Apache Spark, which helps distribute computing tasks across multiple machines, and Google’s TensorFlow and Meta’s PyTorch, both of which allow developers to incorporate machine learning techniques into their software.

Meta has gone farther than its competitors in releasing the source code to its own large language model, Llama, which has been downloaded more than 300 million times. In a letter sent to Wiener in June, Meta deputy chief privacy officer Rob Sherman argued that the bill would “deter AI innovation in California at a time when we should be promoting it” and discourage release of open source models like Llama.

Ion Stoica is a professor at the University of California, Berkeley and cofounder of Databricks, an AI company built on Apache Spark. If SB 1047 passes, he predicts that within a year open source models from overseas, likely China, will overtake those made in the United States. Three of the top six open source models available today come from China, according to the Chatbot Arena evaluation method Stoica helped devise. 

Open source defenders also voiced opposition to SB 1047 at a town hall hosted with Wiener at GitHub, an open source repository owned by Microsoft, and a generative AI symposium held in May.

Governor Gavin Newsom, who has not taken a position on the legislation, told the audience it’s important to respond to AI inventors like Geoffrey Hinton who insist on the need for regulation, but also said he wants California to remain an AI leader and advised lawmakers against overreach. “If we over regulate, if we over indulge and chase a shiny object, we can put ourselves in a perilous position,” the governor said. “At the same time we have an obligation to lead.”


Aiming To Protect Tech Workers and Society

Sunny Gandhi, vice president of government affairs at Encode Justice, a nonprofit focused on bringing young people into the fight against AI harms and a cosponsor of the bill, said it has sparked a backlash because tech firms are not used to being held responsible for the effects of their products.

“It’s very different and terrifying for them that they are now being held to the same standards that pretty much all other products are in America,” Gandhi said. “There are liability provisions in there, and liability is alien to tech. That’s what they’re worried about.” 

Wiener has disputed some criticisms of his bill, including a claim, in a letter circulated by startup incubator Y Combinator and signed by more than 130 startup founders, that the legislation could end up sending software developers “to jail simply for failing to anticipate misuse of their software.” That assertion arose from the fact that the bill requires builders of sufficiently large language models to submit their test results to the state and makes them guilty of perjury if they lie about the design or testing of an AI model.

It’s very different and terrifying for them that they are now being held to the same standards that pretty much all other products are in America.

Sunny Gandhi, vice president of government affairs at Encode Justice

Wiener said his office started listening to members of the tech community last fall before the bill was introduced and made a number of amendments to ensure the law only applies to major AI labs. Now is the time to act, he told startup founders, “because I don’t have any confidence the federal government is going to act” to regulate AI.

Within the past year, major AI labs signed on to testing and safety commitments with the White House and at international gatherings in the United Kingdom, Germany, and South Korea, but those agreements are voluntary. President Biden has called on Congress to regulate artificial intelligence but it has yet to do so.

Wiener also said the bill is important because the Republican Party vowed, in the platform it adopted last month, to repeal Biden’s executive order, arguing that the order stifles innovation.

In legislative hearings, Wiener has said it’s important to require compliance because “we don’t know who will run these companies in a year or five years and what kind of profit pressures those companies will face at that time.”

AI company Anthropic, which is based in San Francisco, came out in support of the bill if a number of amendments are made, including doing away with a government entity called the Frontier Models Division. That division would review certifications from developers, establish an accreditation process for those who audit AI, and issue guidance on how to limit harms from advanced AI. Wiener told the Y Combinator audience he’d be open to doing away with the division.

Kokotajlo, the whistleblower, calls SB 1047 both a step in the right direction and not enough to prevent the potential harms of AI. He and the other signatories of the June letter have called on companies that are developing AI to create their own processes whereby current and former employees could anonymously report concerns to independent organizations with the expertise to verify whether concern is called for or not.

“Sometimes the people who are worried will turn out to be wrong, and sometimes, I think the people who are worried will turn out to be right,” he said.

In remarks at Y Combinator last month, Wiener thanked members of open source and AI communities for sharing critiques of the bill that led to amendments, but he also urged people to remember what happened when California passed privacy law in 2018 following years of inaction by the federal government. 

“A lot of folks in the tech world were opposed to that bill and told us that everyone was going to leave California if we passed it. We passed it. That did not happen, and we set a standard that I think was a really powerful one.”
