Press Release

In a Bipartisan Vote, Senate Passes Senator Wiener’s Landmark AI Safety and Innovation Bill

SACRAMENTO – The Senate passed SB 1047 by Senator Scott Wiener (D-San Francisco), which aims to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. The bill passed 32-1 with bipartisan support and now heads to the Assembly, where it must pass by August 31.

“As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 aims to do just that,” said Senator Wiener. “By focusing its requirements on the well-resourced developers of the largest and most powerful frontier models, SB 1047 puts sensible guardrails in place against risk while leaving startups free to innovate without any new burdens. We know this bill is a work in progress, and we’re actively meeting with stakeholders and seeking constructive input. We’ll continue working to improve this legislation in the Assembly, and I look forward to working with my colleagues to support responsible AI development in California.”

Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have severe consequences, including risks to critical infrastructure, cyberattacks, and the creation of novel biological weapons. A recent survey found that 70% of AI researchers believe safety should be given greater priority in AI research, while 73% expressed “substantial” or “extreme” concern that AI would fall into the hands of dangerous groups.

SB 1047 is supported by two of the most cited AI researchers of all time: the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. Of SB 1047, Professor Hinton said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. 

“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place where this technology has taken off.”

In line with President Biden’s Executive Order on Artificial Intelligence and their own voluntary commitments, several frontier AI developers in California have taken great strides in pioneering safe development practices, implementing essential measures such as cybersecurity protections and safety evaluations of AI system capabilities.

Last September, Governor Newsom issued an Executive Order directing state agencies to begin preparing for AI and to assess its impact on vulnerable communities. The Administration released a report in November examining AI’s most beneficial uses and potential harms.

SB 1047 balances AI innovation with safety by:

  • Setting clear standards for developers of AI models trained using computing power greater than 10^26 floating-point operations, that cost over $100 million to train, and that would be substantially more powerful than any AI in existence today
  • Requiring developers of such large “frontier” AI models to take basic precautions, such as pre-deployment safety testing, red-teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring
  • Creating whistleblower protections for employees of frontier AI laboratories
  • Requiring transparent pricing and prohibiting price discrimination to protect consumers and to ensure startup developers have equal opportunities to compete and innovate
  • Empowering California’s Attorney General to take legal action in the event the developer of an extremely powerful AI model causes severe harm to Californians or if the developer’s negligence poses an imminent threat to public safety
  • Establishing a new public cloud computing cluster, CalCompute, to enable startups, researchers, and community groups to participate in the development of large-scale AI systems and to align its benefits with the values and needs of California communities
  • Establishing an advisory council to support safe and secure open-source AI.

SB 1047 is coauthored by Senator Roth (D-Riverside) and Senator Stern (D-Los Angeles) and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.

“California is showing the nation how to balance AI innovation with safety by establishing clear, predictable, common sense legal standards for AI companies. We thank the Senators who supported this pioneering bill,” said Nathan Calvin, Senior Policy Counsel at the Center for AI Safety Action Fund. “As the home to many of the largest and most innovative AI companies, California should be leading the way with industry-leading policies for safe, responsible AI while also making this incredible technology accessible to academic researchers and startups to encourage innovation and competition.”

“I’m thrilled that the Senate has passed SB 1047 with bipartisan support,” said Sunny Gandhi, Vice President of Political Affairs at Encode Justice. “This legislation will help bring us closer to ensuring AI is developed responsibly, with the necessary guardrails to protect our generation's future. We urge the Assembly to swiftly approve the bill and establish California as a leader in safe and equitable AI innovation.”

“The passage of SB 1047 through the Senate brings us one big step closer to a future where the biggest developments in tech are incentivized to serve the public, not just the interests and profits of a few powerful tech companies,” said Teri Olle, Director of Economic Security California. “Beyond regulating the largest models, SB 1047 will create CalCompute, a state-of-the-art, high-powered computing infrastructure that can provide access to AI systems for researchers, academics, and startups who do not have hundreds of millions to invest. It opens the door to truly embrace innovation and competition.”

###