Governor Newsom Signs Senator Wiener’s Landmark AI Law To Set Commonsense Guardrails, Boost Innovation
SB 53 enacts the nation’s first transparency requirements for safety plans on the most advanced AI models, establishes a public cloud compute cluster to foster democratized AI innovation, and creates protections for whistleblowers at leading AI labs.
SACRAMENTO – Governor Newsom signed Senate Bill (SB) 53, authored by Senator Scott Wiener (D-San Francisco), into law. The law, which passed the Legislature with bipartisan support, enacts recommendations of a working group of some of the world’s leading AI experts convened by Governor Newsom last year. Building on the working group report’s “trust, but verify” approach, SB 53 requires the largest AI companies to publicly disclose their safety and security protocols, report the most critical safety incidents, and protect whistleblowers. It also advances an industrial policy for AI by creating “CalCompute,” a public cloud compute cluster that provides AI infrastructure for startups and researchers.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk. With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” said Senator Wiener. “I’m grateful to the Governor for his leadership in convening the Joint California AI Policy Working Group, working with us to refine the legislation, and now signing it into law. His Administration’s partnership helped this groundbreaking legislation promote innovation and establish guardrails for trust, fairness, and accountability in the most remarkable new technology in many years.”
Weeks ago, the U.S. Senate voted 99-1 to remove provisions of President Trump’s “Big Beautiful Bill” that would have prevented states from enacting AI regulations. By boosting transparency, SB 53 builds on this vote for accountability.
CalCompute builds on Senator Wiener’s recent legislation to boost semiconductor and other advanced manufacturing in California by streamlining permit approvals for advanced manufacturing plants, and his work to protect democratic access to the internet by authoring the nation’s strongest net neutrality law.
As AI advances, risks and benefits grow
Recent advances in AI have delivered breakthrough benefits across several industries, from accelerating drug discovery and medical diagnostics to improving climate modeling and wildfire prediction. AI systems are revolutionizing education, increasing agricultural productivity, and helping solve complex scientific challenges.
However, the world’s most advanced AI companies and researchers acknowledge that as their models become more powerful, they also pose increasing risks of catastrophic damage. The Working Group report states:
Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies’ [including OpenAI and Anthropic] own reporting reveals concerning capability jumps across threat categories.
To address these risks, AI developers like Meta, Google, OpenAI, and Anthropic have made voluntary commitments to conduct safety testing and establish robust safety and security protocols. Several California-based frontier AI developers have designed industry-leading safety practices, including safety evaluations and cybersecurity protections. SB 53 codifies these voluntary commitments to establish a level playing field and ensure greater accountability across the industry.
Background on the report
Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models in September 2024, following his veto of Senator Wiener’s SB 1047, tasking the group to “help California develop workable guardrails for deploying GenAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
The Working Group was led by experts including the “godmother of AI” Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society.
On June 17, the Working Group released its Final Report. While the report does not endorse specific legislation, it promotes a “trust, but verify” framework to establish guardrails that reduce material risks while supporting continued innovation.
SB 53 balances AI risk with benefits
Drawing on recommendations of the Working Group Report, SB 53:
- Establishes transparency into large companies’ safety and security protocols and risk evaluations. Companies must publicly disclose these protocols and evaluations, with redactions permitted to protect intellectual property.
- Mandates reporting of critical safety incidents (e.g., model-enabled CBRN threats, major cyber-attacks, or loss of model control) to the Governor’s Office of Emergency Services (OES) within 15 days.
- Protects employees who reveal evidence of critical risk, or of violations of the act, by AI developers.
Under SB 53, the Attorney General may seek civil penalties for violations of the act. SB 53 does not impose any new liability for harms caused by AI systems.
World-first requirements
While the EU AI Act requires companies to disclose their safety and security plans and protocols, those disclosures are made privately to a government agency. SB 53’s required disclosures are higher-level, but they must be posted publicly to ensure greater accountability.
In addition, SB 53 contains a world-first requirement that companies disclose safety incidents involving dangerous deceptive behavior by autonomous AI systems. For example, developers typically place controls on AI systems to prevent them from assisting with the construction of a bioweapon or other highly dangerous tasks set by users. If, during routine testing, a developer catches the AI system in a lie about how well those controls are working, and that lie materially increases the risk of a catastrophic harm, the developer would be required to disclose that incident to OES under SB 53. These disclosure requirements around deceptive behavior are increasingly critical as AI systems are deployed in more and more consequential contexts.
SB 53 is sponsored by Encode AI, Economic Security California Action, and the Secure AI Project.
“Encode is proud to have worked with Senator Wiener and Governor Newsom on this landmark AI safety legislation. California has long been at the forefront of technological innovation, and it is only fitting that it now leads in setting common-sense safeguards to protect the public,” said Nathan Calvin, General Counsel and VP of State Affairs at Encode AI. “As AI systems become more powerful and consequential, SB 53’s requirements—greater transparency in safety practices and strong protections for AI whistleblowers—are critical to ensuring that safety is built into innovation from the very beginning, rather than treated as an afterthought.”
“By signing SB 53 into law, Governor Newsom recognizes that the tremendous innovative power of AI should benefit all of us, not just a handful of tech corporations and their investors,” said Teri Olle, Director of Economic Security California Action (ESCAA). “In addition to establishing common-sense transparency and safety standards, the creation of a public option for computing power through CalCompute will democratize access to critical AI infrastructure. This will make it possible for California’s brightest minds to build AI tools that solve our most pressing public problems for generations to come, and it arrives not a moment too soon. We’re immensely grateful to Senator Wiener for his tireless leadership and to the broad coalition of allies who worked to get SB 53 over the finish line.”
“The California Report on Frontier AI Policy acknowledged that ‘if those whose analysis points to the most extreme risks are right…then the stakes and costs for inaction on frontier AI at this current moment are extremely high,’” said Thomas Woodside, Co-Founder and Senior Policy Advisor at Secure AI Project. “Thanks to Senator Wiener’s tireless work and Governor Newsom’s leadership, California has taken a major step towards recognizing those stakes. Much work remains to be done, but this will be remembered as a foundational moment.”
###