Senator Wiener’s Landmark Responsible AI Innovation Bill Advances to Final Vote
SB 53 enacts world-leading AI safety disclosure requirements, creates a public cloud compute cluster to foster democratized AI innovation, and establishes protections for whistleblowers at leading AI labs.
SACRAMENTO – Senate Bill 53, authored by Senator Scott Wiener (D-San Francisco), is advancing to its final votes in the Assembly and Senate this week after final amendments were made to align the bill more closely with the recommendations of Governor Newsom’s Joint California Policy Working Group on AI Frontier Models. The bill language is the result of discussions with the Governor’s Administration and stakeholders. The final bill expands disclosure obligations to smaller developers and retains its core provisions: strong, world-leading safety disclosure requirements, public investment in AI infrastructure for startups and researchers, and protections for whistleblowers at leading AI labs.
“The final version of SB 53 will ensure California continues to lead not only on AI innovation, but on responsible practices to help ensure that innovation is safe and secure,” said Senator Wiener. “I’m grateful to the Administration for convening the Joint AI Working Group and discussing with us how to best implement its recommendations to make our bill as scientific and fair as it can be. I look forward to this week’s vote to send this world-leading bill to the Governor’s desk.”
The changes to SB 53 are as follows:
- The bill’s requirements now apply only to models trained with more than 10^26 floating-point operations (FLOPS). The only exception is whistleblower protections, which apply to whistleblowers working on models of any size.
- Companies that train frontier models but do not meet the revenue threshold have new obligations to disclose basic, high-level safety details. Companies above the revenue threshold must make more detailed disclosures.
- Safety disclosures have been streamlined and simplified.
- The Attorney General no longer has the authority to issue regulations adjusting definitions. Instead, the California Department of Technology (CDT) will produce an annual report recommending definitional changes to the Legislature.
- Developers are required to review and, as appropriate, update their framework annually.
- Penalties have been capped at $1 million per violation.
- Instead of publicly reporting regular risk assessments of their internal use of models, companies will send those reports confidentially to the Governor’s Office of Emergency Services (OES).
SB 53 contains a world-first requirement that companies disclose safety incidents involving dangerous deceptive behavior by autonomous AI systems. For example, developers typically place controls on AI systems to prevent them from assisting with the construction of a bioweapon or other highly dangerous tasks set by users. If, during routine testing, developers catch an AI system lying about how well those controls are working, and that lie materially increases the risk of a catastrophic harm, SB 53 would require the developer to disclose the incident to the Office of Emergency Services. These disclosure requirements around deceptive behavior are increasingly critical as AI systems are deployed in ever more consequential contexts.
SB 53 retains provisions — called “CalCompute” — that advance a bold industrial strategy to boost AI development and democratize access to the most advanced AI models and tools. CalCompute will be a public cloud compute cluster housed at the University of California that provides free and low-cost access to compute for startups and academic researchers. CalCompute builds on Senator Wiener’s recent legislation to boost semiconductor and other advanced manufacturing in California by streamlining permit approvals for advanced manufacturing plants, and his work to protect democratic access to the internet by authoring the nation’s strongest net neutrality law.
SB 53 also retains protections for whistleblowers at AI labs who disclose significant risks.
Weeks ago, the U.S. Senate voted 99-1 to remove provisions of President Trump’s “Big Beautiful Bill” that would have prevented states from enacting AI regulations. By boosting transparency, SB 53 builds on that bipartisan vote for accountability.
As AI advances, risks and benefits grow
Recent advances in AI have delivered breakthrough benefits across several industries, from accelerating drug discovery and medical diagnostics to improving climate modeling and wildfire prediction. AI systems are revolutionizing education, increasing agricultural productivity, and helping solve complex scientific challenges.
However, the world’s most advanced AI companies and researchers acknowledge that as their models become more powerful, they also pose increasing risks of catastrophic damage. The Working Group report states:
Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies’ [including OpenAI and Anthropic] own reporting reveals concerning capability jumps across threat categories.
To address these risks, AI developers like Meta, Google, OpenAI, and Anthropic have entered into voluntary commitments to conduct safety testing and establish robust safety and security protocols. Several California-based frontier AI developers have designed industry-leading safety practices, including safety evaluations and cybersecurity protections. SB 53 codifies these voluntary commitments to establish a level playing field and ensure greater accountability across the industry.
Background on the report
Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models in September 2024, following his veto of Senator Wiener’s SB 1047, tasking the group to “help California develop workable guardrails for deploying GenAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
The Working Group is led by experts including the “godmother of AI” Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society.
On June 17, the Working Group released its Final Report. While the report does not endorse specific legislation, it promotes a “trust, but verify” framework to establish guardrails that reduce material risks while supporting continued innovation.
SB 53 balances AI risk with benefits
Drawing on recommendations of the Working Group Report, SB 53:
- Establishes transparency into large companies’ safety and security protocols and risk evaluations, which must be disclosed in redacted form to protect intellectual property.
- Mandates reporting of critical safety incidents (e.g., model-enabled CBRN threats, major cyber-attacks, or loss of model control) within 15 days to the Governor’s OES.
- Protects employees who reveal evidence of critical risk or violations of the act by AI developers.
Under SB 53, the Attorney General may impose civil penalties for violations of the act. SB 53 does not impose any new liability for harms caused by AI systems.
SB 53 is sponsored by Encode AI, Economic Security California Action, and the Secure AI Project.
“SB 53 shows that safety and innovation are compatible,” said Sunny Gandhi, Vice President of Political Affairs at Encode AI, a co-sponsor of the bill. “By requiring transparency and accountability from the largest AI developers, the bill makes clear that California can lead on both responsible governance and cutting-edge innovation.”
“The Governor’s Working Group emphasized a simple principle: ‘trust but verify,’” said Andrew Doris, Senior Policy Analyst at the Secure AI Project, also a co-sponsor of the bill. “There’s now a broad consensus that the largest AI developers need to be transparent about their safety practices and report serious incidents. SB 53 puts that consensus into law, turning expert recommendations into safeguards for Californians.”
“California once again has a choice to make on AI: let Big Tech billionaires create their own rules, or pass SB 53 and make safety, transparency, and public interest a priority,” said Teri Olle, Director of Economic Security California Action, a co-sponsor of the bill. “The timing of this bill couldn’t be more critical. As federal lawmakers shirk their responsibility and propose blanket bans on AI regulation and Silicon Valley leaders cozy up to the Trump Administration, California can set the standard for responsible AI innovation for the world.”
###