Senator Wiener Introduces Legislation to Protect AI Whistleblowers & Boost Responsible AI Development
SAN FRANCISCO – Senator Scott Wiener (D-San Francisco) introduced SB 53, legislation to promote the responsible development of large-scale artificial intelligence (AI) systems. AI is already delivering incredible benefits, promising to revolutionize a huge range of critical fields, from mental health diagnostics to wildfire response. At the same time, most top AI researchers warn that the technology presents substantial risks, and luminaries like Nobel laureate Geoffrey Hinton have called for immediate guardrails to manage those risks.
SB 53 balances the need for safeguards on the development of AI with the need to accelerate progress toward its benefits. The bill bolsters protections for whistleblowers, who are key to alerting the public about developing AI risks. It also takes the first step toward establishing CalCompute, a research cluster to support startups and researchers developing large-scale AI. The bill helps California secure its global leadership as states like New York establish their own AI research clusters.
When Governor Newsom vetoed SB 1047, last year’s landmark AI legislation authored by Senator Wiener, he announced that the state would work with a group of academics and civil society leaders to develop safeguards to regulate artificial intelligence. That group is expected to issue an interim report in the coming weeks or months, and a final report over the summer.
“The greatest innovations happen when our brightest minds have the resources they need and the freedom to speak their minds. SB 53 supports the development of large-scale AI systems by providing low-cost compute to researchers and start-ups through CalCompute. At the same time, the bill also provides critical protections to workers who need to sound the alarm if something goes wrong in developing these highly advanced systems,” said Senator Wiener. “We are still early in the legislative process, and this bill may evolve as the process continues. I’m closely monitoring the work of the Governor’s AI Working Group, as well as developments in the AI field for changes that warrant a legislative response. California’s leadership on AI is more critical than ever as the new federal Administration proceeds with shredding the guardrails meant to keep Americans safe from the known and foreseeable risks that advanced AI systems present.”
California is home to the world’s largest concentration of leading AI companies and researchers. Yet other jurisdictions are investing heavily in AI, hoping to secure leadership in the emerging space. Most recently, New York Governor Kathy Hochul announced over $90 million in new funding for the state’s AI consortium, which aims to advance responsible AI innovation.
Research clusters can be essential to spurring innovation. Developing today’s advanced AI models requires the use of expensive computing resources, which can put advanced AI development out of reach for many startups and researchers. SB 53 would help rectify that problem by providing low-cost compute and other resources to aid responsible AI development.
“California has a history of making important public investments where it counts: from stem cell research to our stellar higher education system, we have led the way in using public dollars to foster the American entrepreneurial spirit,” said Teri Olle, Director of Economic Security for California Action. “We are proud to sponsor SB 53, a bill that would continue in this important tradition by creating the infrastructure for a public option for cloud computing needed in AI development. If we want to see AI used to promote the public good and make life better and easier for people, then we must broaden access to the computing power required to fuel innovation.”
“With CalCompute, we’re democratizing access to the computational resources that power AI innovation,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “And by protecting whistleblowers, we’re ensuring that security isn’t sacrificed for speed. California can be a leader by making transformative technology both more accessible and more transparent.”
The Trump Administration has rescinded President Biden’s Executive Orders establishing guardrails on the most advanced AI systems. Employees at the National Institute of Standards and Technology, the federal body that sets safety standards for artificial intelligence, are also bracing for the potential of massive layoffs.
AI researchers continue to speak out about the need to impose guardrails to manage the risks presented by AI. Last month, AI godfather Professor Yoshua Bengio announced the world’s first-ever International AI Safety Report, a 298-page document created by 100 independent AI experts from around the world.
Whistleblowers—employees of the largest labs who speak out about safety and security risks posed by the advanced AI systems they are developing—have emerged as a key window into the safety practices of large AI labs and the emergent capabilities of large-scale AI models.
SB 53 recognizes that we need a multipronged approach to encourage responsible AI innovation. It does so by:
- Ensuring employees of frontier model labs are not retaliated against for reporting their belief that a developer’s activities pose a critical risk, as defined, or that the developer has made false or misleading statements about its management of critical risk; and
- Establishing a process to create a public cloud-computing cluster (CalCompute) that will conduct research into the safe and secure deployment of large-scale artificial intelligence (AI) models.
SB 53 is sponsored by Encode, Economic Security Action CA, and Secure AI.
###