WHAT THEY ARE SAYING: Senator Wiener’s Landmark AI Bill Awaits Newsom’s Decision
SB 53 enacts the nation’s first transparency requirements for safety plans covering the most advanced AI models, establishes a public cloud compute cluster to foster democratized AI innovation, and protects whistleblowers at leading AI labs.
SAN FRANCISCO – Last week, the California Legislature passed Senator Scott Wiener (D-San Francisco)’s Senate Bill (SB) 53 in a bipartisan vote. The bill enacts recommendations of a working group of some of the world’s leading AI experts convened by Governor Newsom last year: it requires the largest AI companies to publicly disclose their safety and security protocols and report the most critical safety incidents, and it protects whistleblowers. It also advances an industrial policy for AI by creating “CalCompute,” a public cloud compute cluster that provides AI infrastructure for startups and researchers.
A range of AI researchers, startup founders, and civil society groups cheered the passage of the landmark legislation:
"I’m a software entrepreneur who has been in the tech industry since 1984. AI is the most exciting technology that I’ve seen. SB 53 will make sure we have information about risks from this technology. This bill is a reasonable and well-informed approach, and I urge Governor Newsom to sign it.” said Steve Newman, technical co-founder of eight technology startups, including Writely (which became Google Docs), and co-creator of Spectre, one of the most influential video games of the 1990s.
“Researchers like myself as well as the leaders of major AI companies agree that AI could potentially cause catastrophes in the near future,” said Stuart Russell, Distinguished Professor of Computer Science at UC Berkeley. “But right now, we have fewer legal requirements on these companies than we do on sandwich shops. I applaud the legislature for grappling with this issue in such a thoughtful manner, and I urge the Governor to sign SB 53 into law.”
“This is a triumph for transparency and trust,” said Bruce Reed, Head of AI, Common Sense Media. “For America to win the AI race and realize AI’s enormous promise, Americans need to know and trust that it’s safe. Senator Wiener and Governor Newsom’s working group deserve great credit for a landmark achievement.”
“My company is built on the premise that AI can help people make more informed, evidence-based decisions,” said Andreas Stuhlmüller, CEO of Elicit. “SB 53 will help give us all the evidence we need to make informed policy decisions about AI. I am pleased that it passed the legislature with bipartisan support.”
“In my research at UC Berkeley, I study how we can improve policies and practices to help AI companies manage the risks of AI technologies, especially significant risks to society,” said Jessica Newman, Director of the AI Security Initiative at the UC Berkeley Center for Long-Term Cybersecurity, who speaks here in her personal capacity, not on behalf of UC Berkeley. “By building on AI companies’ voluntary commitments, SB 53 will produce public information that helps surface signs of threats in a timely manner.”
“With the passage of SB 53, California is proving once again that smart guardrails don’t slow innovation—they supercharge it,” said Mike Kubzansky, CEO of Omidyar Network. “This bill democratizes AI, safeguards the public, and gives companies the clarity they need—setting a bold model for the nation to follow.”
“Self-driving cars were the first AI agents, and transparency made the industry safer,” said Darius Emrani, CEO of Scorecard and a former autonomous vehicle engineer. “Now I run a platform for AI agent builders, and we have the opportunity to use this proven approach to transparency to accelerate the safe deployment of AI agents.”
"At the Stanford Center for AI Safety, we embrace the promise and potential of AI, and we believe that carefully considering the potential risks of AI technologies is a crucial part of ensuring that we are able to fulfill that potential in a way that is safe and beneficial for humanity,” said Clark Barrett, Professor (Research) of Computer Science at Stanford University and Co-Director of the Stanford Center for AI Safety. “SB 53 is a positive step toward finding that appropriate balance between progress and prudence.”
As AI advances, risks and benefits grow
Recent advances in AI have delivered breakthrough benefits across several industries, from accelerating drug discovery and medical diagnostics to improving climate modeling and wildfire prediction. AI systems are revolutionizing education, increasing agricultural productivity, and helping solve complex scientific challenges.
However, the world’s most advanced AI companies and researchers acknowledge that as their models become more powerful, they also pose increasing risks of catastrophic damage. The Working Group report states:
Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies’ [including OpenAI and Anthropic] own reporting reveals concerning capability jumps across threat categories.
To address these risks, AI developers like Meta, Google, OpenAI, and Anthropic have made voluntary commitments to conduct safety testing and establish robust safety and security protocols. Several California-based frontier AI developers have designed industry-leading safety practices, including safety evaluations and cybersecurity protections. SB 53 codifies these voluntary commitments to establish a level playing field and ensure greater accountability across the industry.
Background on the report
Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models in September 2024, following his veto of Senator Wiener’s SB 1047, tasking the group to “help California develop workable guardrails for deploying GenAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
The Working Group is led by experts including the “godmother of AI” Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society.
On June 17, the Working Group released its Final Report. While the report does not endorse specific legislation, it promotes a “trust, but verify” framework to establish guardrails that reduce material risks while supporting continued innovation.
SB 53 balances AI risk with benefits
Drawing on recommendations of the Working Group Report, SB 53:
- Establishes transparency into large companies’ safety and security protocols and risk evaluations. Companies will be required to disclose their safety and security protocols and risk evaluations in redacted form to protect intellectual property.
- Mandates reporting of critical safety incidents (e.g., model-enabled CBRN threats, major cyber-attacks, or loss of model control) to the Governor’s Office of Emergency Services (OES) within 15 days.
- Protects employees who reveal evidence of critical risk or violations of the act by AI developers.
Under SB 53, the Attorney General may impose civil penalties for violations of the act. SB 53 does not impose any new liability for harms caused by AI systems.
World-first requirements
While the EU AI Act requires companies to disclose their safety and security plans and protocols, those disclosures are made privately to a government agency. SB 53’s disclosures are higher-level, but they must be posted publicly to ensure greater accountability.
In addition, SB 53 contains a world-first requirement that companies disclose safety incidents involving dangerous deceptive behavior by autonomous AI systems. For example, developers typically place controls on AI systems to prevent them from assisting with the construction of a bioweapon or other highly dangerous tasks set by users. If, during routine testing, a developer catches an AI system lying about how well those controls are working, and that deception materially increases the risk of a catastrophic harm, the developer would be required to disclose the incident to the Office of Emergency Services under SB 53. These disclosure requirements around deceptive behavior are increasingly critical as AI systems are deployed in more and more consequential contexts.
SB 53 is sponsored by Encode AI, Economic Security Action California, and the Secure AI Project.
###