Senator Wiener’s Landmark AI Safety and Innovation Bill Passes Assembly Privacy Committee
SACRAMENTO – The Assembly Privacy & Consumer Protection Committee passed SB 1047 by Senator Scott Wiener (D-San Francisco), which aims to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. The bill passed 8-0 and now heads to the Judiciary Committee.
The Assembly Privacy Committee introduced new amendments that strengthen the bill while allowing companies more flexibility by:
- Giving companies more flexibility in how they meet their responsible development obligations while still holding them liable if irresponsible behavior leads to catastrophic harm.
- Allowing a state agency to change the compute threshold for a model to be covered under the bill starting in 2027, rather than keeping it fixed in statute at 10^26 flops. The requirement that the model cost at least $100 million to train cannot be changed.
- Requiring companies to obtain third-party safety audits by 2028.
- Strengthening whistleblower protections.
“As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 aims to do just that,” said Senator Wiener. “By taking a light touch approach to regulation that focuses exclusively on the largest companies building the most capable models, SB 1047 puts sensible guardrails in place against risk while leaving startups free to innovate without any new burdens. We know this bill is a work in progress, and we’re actively meeting with stakeholders and seeking constructive input. We accept the amendments introduced by the Committee today and look forward to continued conversations to improve the bill as it advances.”
Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have severe consequences, including risks to critical infrastructure, cyberattacks, and the creation of novel biological weapons. A recent survey found that 70% of AI researchers believe safety should be given greater priority in AI research, while 73% expressed “substantial” or “extreme” concern that AI would fall into the hands of dangerous groups.
SB 1047 is supported by two of the most cited AI researchers of all time: the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. Of SB 1047, Professor Hinton said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.
“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off.”
In line with President Biden’s Executive Order on Artificial Intelligence, and their own voluntary commitments, several frontier AI developers in California have taken great strides in pioneering safe development practices – implementing essential measures such as cybersecurity protections and safety evaluations of AI system capabilities.
Last September, Governor Newsom issued an Executive Order directing state agencies to begin preparing for AI and assess the impact of AI on vulnerable communities. The Administration released a report in November examining AI’s most beneficial uses and potential harms.
SB 1047 balances AI innovation with safety by:
- Setting clear standards for developers of AI models with computing power greater than 10^26 floating-point operations that cost over $100 million to train and would be substantially more powerful than any AI in existence today
- Requiring developers of such large “frontier” AI models to take basic precautions, such as pre-deployment safety testing, red-teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring
- Creating whistleblower protections for employees of frontier AI laboratories
- Requiring transparent pricing and prohibiting price discrimination to protect consumers and to ensure startup developers have equal opportunities to compete and innovate
- Empowering California’s Attorney General to take legal action in the event the developer of an extremely powerful AI model causes severe harm to Californians or if the developer’s negligence poses an imminent threat to public safety
- Establishing a new public cloud computing cluster, CalCompute, to enable startups, researchers, and community groups to participate in the development of large-scale AI systems and align its benefits with the values and needs of California communities
- Establishing an advisory council to support safe and secure open-source AI
SB 1047 is coauthored by Senator Roth (D-Riverside) and Senator Stern (D-Los Angeles) and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.
“The majority of the world’s top AI companies are based in California, so it’s logical for the state to lead on policies balancing AI safety with innovation and competition,” said Nathan Calvin, Senior Policy Counsel at the Center for AI Safety Action Fund. “AI is poised to fuel profound advancements that will improve our quality of life, but the industry’s potential is hamstrung by a lack of public trust. The common-sense safety standards for AI developers in this legislation will help ensure society gets the best AI has to offer while reducing risks that it will cause catastrophic harm.”
“We need human-centered AI that is built, designed, and governed to benefit society. By improving coordination between government, industry, and communities on AI safety, this bill will protect Californians and future generations from the risks of large-scale AI systems,” said Sneha Revanur, President and Founder of Encode Justice. “Responsible, appropriate guardrails remain a critical starting point for this technology to enhance human potential instead of threatening it.”
“The continued progress of SB 1047 brings us closer to a future where the biggest developments in AI serve the public, not just the interests and profits of a few powerful tech companies,” said Teri Olle, Director of Economic Security California. “Instead of seeing California’s leaders wait for a catastrophe to act or bend to heavy pushback from some of the biggest players in tech seeking to evade accountability, it is exciting to see common-sense, preventative safety measures for AI continue to move forward.”
###