Press Release

Senator Wiener Responds to Governor Newsom Vetoing Landmark AI Bill

SACRAMENTO – Governor Newsom vetoed the landmark California AI safety bill, SB 1047 by Senator Scott Wiener (D-San Francisco). The bill would have enacted common sense, first-in-the-nation safeguards to protect the public from AI being used to conduct cyberattacks on critical infrastructure; develop chemical, nuclear, or biological weapons; or unleash automated crime. It also would have established CalCompute, a public cloud cluster to speed the development of responsible AI systems by providing low-cost access to compute for researchers and startups.

“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” said Senator Wiener. “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.

“The Governor’s veto message lists a range of criticisms of SB 1047: that the bill doesn’t go far enough, yet goes too far; that the risks are urgent but we must move with caution. SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd.

“While we would have welcomed this input from his office during the legislative process when there was time to make changes to the bill, I am glad to see the Governor agree that the risks presented by AI are real and that California has a role to play in mitigating them. AI continues to advance very rapidly, and the risks these systems present advance along with them. Regulators must be willing to grapple with that reality and take decisive action that protects our innovation ecosystem as we craft regulations for this emerging industry. I look forward to engaging with the Governor’s AI safety working group in the Legislature next year as we work to ensure that the safeguards California enacts adequately protect the public while we still have an opportunity to act before a catastrophe occurs.

“This veto is a missed opportunity for California to once again lead on innovative tech regulation — just as we did around data privacy and net neutrality — and we are all less safe as a result.

“At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage. Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight. Leaders from across civil society, from Hollywood to women’s groups to youth activists, found their voice to advocate for commonsense, proactive technology safeguards to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community contemplates the best ways to protect the public from the risks presented by AI.

“California will continue to lead in that conversation — we are not going anywhere.”

SB 1047 is supported by the two most-cited AI researchers of all time: the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. Professor Bengio published an op-ed in Fortune in support of the bill.

Of SB 1047, Professor Hinton, former AI lead at Google, said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. 


“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place where this technology has taken off.”


Statements from SB 1047 co-sponsors follow.

Nathan Calvin, Senior Policy Counsel at Center for AI Safety Action Fund, said:

"We are disappointed by Governor Newsom's decision to veto this urgent and common sense safety bill. Experts have noted that catastrophic threats to society from AI may materialize quickly, so today’s veto constitutes an unnecessary and dangerous gamble with the public’s safety. With rapidly growing investment in AI and increasing potential for this technology to be used for good and harm, AI safety is a critical issue that is here to stay. People want their leaders to take action, and we remain committed to advocating fiercely for AI safety in California. This bill inspired a national movement for action on AI safety and we’re just getting started.”

Teri Olle, Director of Economic Security California Action, added:

“Governor Newsom’s veto of SB 1047 forfeits our country’s most promising opportunity to implement responsible guardrails around the development of AI today. The failure of this bill demonstrates the enduring power and influence of the deep-pocketed tech industry, driven by the need to maintain the status quo – a hands-off regulatory environment and exponential profit margins. The vast majority of Californians, and American voters, want their leaders to prioritize AI safety and don't trust companies to prioritize safety on their own. This veto exposes a dangerous disconnect between public interest and policy action when it comes to AI – a disconnect that needs urgent repair."

Sunny Gandhi, Vice President of Political Affairs at Encode Justice, stated:

"This veto is disappointing but we will not be stopped by it. The bill energized youth leaders across the country eager to see common-sense AI safety reform. AI is an exciting technology that will define the future but it’s too powerful to be unleashed in a way where youth inherit a world bearing the costs of what gets broken along the way. Without safeguards like this, AI systems may soon be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid, or creating biological weapons, leading to even more public distrust in AI. Our fight for AI safety continues. We will push for responsible AI governance that protects public safety while fostering innovation. We don’t have to choose when we deserve both."

Background on SB 1047:

Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have severe consequences, including risks to critical infrastructure, cyberattacks, and the creation of novel biological weapons. A recent survey found that 70% of AI researchers believe safety should be prioritized more in AI research, while 73% expressed “substantial” or “extreme” concern that AI would fall into the hands of dangerous groups.

Public polling has repeatedly shown overwhelming, bipartisan support of SB 1047 among the public. Tech workers are even more likely than members of the general public to support the bill.

SB 1047 would require developers of the most advanced AI systems to test their models for the ability to cause critical harm. Developers would also be required to put in place common-sense guardrails to help mitigate risk. The legislation would cover only the most powerful AI systems: those costing over $100 million to train.

The bill would also establish a new public cloud computing cluster, CalCompute, to enable startups, researchers, and community groups to participate in the responsible development of large-scale AI systems. To drive better safety outcomes across the AI ecosystem, it would also create whistleblower protections for employees of frontier AI laboratories.

SB 1047 is coauthored by Senator Roth (D-Riverside), Senator Susan Rubio (D-Baldwin Park), and Senator Stern (D-Los Angeles) and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.


###