Senator Wiener Introduces Legislation to Ensure Safe Development of Large-Scale Artificial Intelligence Systems and Support AI Innovation in California

February 8, 2024

SACRAMENTO – Today, Senator Scott Wiener (D-San Francisco) introduced SB 1047, a bill that seeks to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. These duties do not apply to startups developing less potent models. The bill also invests in the future of California’s AI industry by establishing CalCompute, a public cloud computing cluster that will allow startups, researchers, and community groups to participate in the development of large-scale AI systems. 

This announcement comes on the same day the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) launched the U.S. AI Safety Institute Consortium (AISIC), the nation’s first body dedicated to developing robust AI measurements and evaluations. SB 1047 will carry AISIC’s work forward in California, helping codify those best practices in statute. 

“Large-scale artificial intelligence has the potential to produce an incredible range of benefits for Californians and our economy—from advances in medicine and climate science to improved wildfire forecasting and clean power development,” said Senator Wiener. “It also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks. SB 1047 does just that, by developing responsible, appropriate guardrails around development of the biggest, most high-impact AI systems to ensure they are used to improve Californians’ lives, without compromising safety or security.

“SB 1047 will also promote the growth of the AI industry by establishing CalCompute, a public AI research cluster that will allow startups, researchers, and community groups to participate in the development of large-scale AI systems. By providing a broad range of stakeholders with access to the AI development process, CalCompute will help align large-scale AI systems with the values and needs of California communities.”

California has become a vibrant hub for artificial intelligence. Universities, startups, and technology companies are using AI to accelerate drug discovery, coordinate wildfire responses, optimize energy consumption, and enhance creativity. Artificial intelligence has enormous potential to benefit our state, our nation, and the world. California must support and invest in the development and use of this technology in government, in businesses, and in civil society to ensure that it remains at the cutting edge of AI innovation.

At the same time, scientists, engineers, and business leaders at the forefront of this technology, including the three most cited machine learning researchers of all time, have repeatedly warned policymakers that failure to take appropriate precautions could have severe consequences. In the future, the most powerful AI models could pose serious dangers to public safety and national security if they are developed recklessly. These dangers include risks to critical infrastructure and the threat of novel biological weapons and cyberattacks. California must ensure that the small handful of companies developing extremely powerful models take reasonable care to prevent their models from causing serious harm.

For any new technology to succeed in being adopted widely, potential users have to trust that it is both safe and compatible with their needs and values. Subscribing to a set of core duties to ensure safety is a good basis for developers to earn this trust, and establishing a new compute cluster that gives researchers, startups, and community groups the opportunity to participate in AI development will help ensure this new technology aligns with the needs and values of the communities it seeks to serve.

AI developers in California have already taken important first steps in pioneering safe development practices. But California’s government cannot afford to be complacent. With Congress paralyzed and the future of the Biden Administration’s Executive Order in doubt, California has an indispensable role to play in ensuring that we develop this extremely powerful technology with basic safety guardrails, in order to allow society to experience AI’s massive potential benefits. Clarifying that developers of the largest and most powerful AI models must take basic precautions, in line with industry-leading best practices, is the clear way forward. It won’t address all of AI’s risks and harms, but it is an indispensable step. And no state is better equipped to take on this challenge than California.

SB 1047 builds on the foundation of important measures already taken by California’s government. In September 2023, Governor Newsom issued an Executive Order directing state agencies to begin preparing for generative AI. The Administration released a report in November examining AI’s most beneficial uses and potential harms, and the Governor has also outlined a timeline for state agencies to develop guidelines for assessing the impact of AI on vulnerable communities. SB 1047 complements and reinforces these initiatives. 

As California ramps up its efforts to address AI, the Biden Administration has begun to set new standards for this technology, with its Blueprint for an AI Bill of Rights and AI Executive Order. Several large-scale developers in California have made voluntary commitments to pioneer safe development practices—including cybersecurity protections and safety evaluations of AI system capabilities. But experts remain concerned about the risks that might emerge in future generations of this technology. According to one recent survey, 70% of AI researchers believe safety should be prioritized in AI research more than it is today, while 73% expressed “substantial” or “extreme” concern that AI would fall into the hands of dangerous groups. 

SB 1047 sets out clear standards for developers of extremely powerful AI systems: systems that meet the bill’s threshold of 10^26 FLOP would cost over $100,000,000 to train, and would be substantially more powerful than any system that exists today. Specifically, SB 1047 clarifies that developers of these models must take basic precautions such as pre-deployment safety testing and cybersecurity protections. These responsibilities only apply to the handful of extremely large AI developers; the bill creates no new obligations for startups or business customers of AI products. If the developer of an extremely powerful model causes severe harm to Californians by behaving irresponsibly, or if the developer’s negligence poses an imminent threat to public safety, the Attorney General of California can hold the developer accountable.

To help ensure California remains the world leader in AI innovation, SB 1047 also creates a new public cloud-computing cluster, CalCompute, that will conduct research into the safe and secure deployment of large-scale AI models, while allowing smaller startups, researchers, and community groups to participate in the development of large-scale AI systems. The bill also creates a new advisory council to advocate for and support safe and secure open-source AI development, and requires cloud-computing companies and frontier model developers to provide transparent pricing and avoid price discrimination.

SB 1047 is sponsored by the Center For AI Safety Action Fund, Encode Justice, and Economic Security California.

Here’s what people are saying about SB 1047:

Leading AI Scientists:

Yoshua Bengio and Geoffrey Hinton are the two most highly cited machine learning researchers in the world. Both received the Turing Award, the Nobel Prize equivalent for computer science, for pioneering the study of machine learning. Both “godfathers” of machine learning have spoken out about the importance of this bill. 

Of SB 1047, Professor Hinton said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. 

“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off.” 

Professor Bengio said: “AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety. Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I've recommended to legislators." 

San Francisco Startup & AI Leaders:

“SB 1047 just seems like good, reasonable policy, and we’re excited to see support for academic research and startups within the bill,” said Evan Conrad, founder of the San Francisco Compute Company.

“AI has immense potential to spur innovation. But that can only happen if we take steps to ensure we innovate responsibly,” said Eric Ries, Co-Founder of Answer.AI and author of The Lean Startup. “We need to encourage massive labs to take basic precautions, without burdening the smaller companies in California’s startup ecosystem.”

National Security Leaders:

“I’m optimistic about this balanced legislative proposal that picks up where the White House Executive Order on AI left off,” said Lieutenant General John (Jack) Shanahan, United States Air Force, Retired. “SB 1047 addresses at the state level the most dangerous, near-term potential risks to civil society and national security in practical and feasible ways. It lays the groundwork for further dialogue among the tech industry and federal, state, and local governments, aiming for a harmonious balance between fostering American innovation and protecting the country.” Shanahan previously served as the inaugural director of the U.S. Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC).

“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Hon. Andrew C. Weber, former Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”

Responsible AI Advocates/Sponsors of SB 1047:

“Large-scale artificial intelligence must be steered in a direction that benefits society—and that requires us to improve coordination between government, industry, and communities on tech safety issues,” said Sneha Revanur, President and Founder of Encode Justice, a co-sponsor of the bill. “This bill is a critical piece of this puzzle, outlining a set of responsible, appropriate guardrails that will protect our generation and future generations of Californians from the risks of large-scale AI systems. It’s incredible to see youth input embedded in this landmark bill.” 

“California’s state government has an essential role to play in ensuring the state recognizes AI’s benefits and adopts industry-leading best practices to avoid its most severe risks, while also making sure AI innovations are accessible to academic researchers and startups. Senator Wiener’s legislation accomplishes all of these goals, and we’re proud to support it,” said Nathan Calvin, Senior Policy Counsel at the Center for AI Safety Action Fund, a co-sponsor of SB 1047. 

“The AI market is dominated by a handful of corporate actors, and this essential legislation takes the first critical step in fostering greater innovation and openness to serve the public interest. With the development of a public cloud like CalCompute, we can harness the potential of AI for good, accessing data to advance the next breakthroughs in healthcare, technology and disaster response,” said Teri Olle, Director of Economic Security California. “This new framework will be groundbreaking for all Californians, and Economic Security California applauds Senator Wiener for spearheading this endeavor.”

Read more about SB 1047 and the state and federal governments’ recent efforts to develop safeguards around large-scale artificial intelligence systems: