Senator Wiener Introduces Safety Framework in Artificial Intelligence Legislation

September 13, 2023

SACRAMENTO – As artificial intelligence (AI) continues to demonstrate increasingly astonishing capabilities, Senator Scott Wiener (D-San Francisco) unveiled SB 294, the Safety in Artificial Intelligence Act. SB 294 presents a framework for California to ensure the safe development of AI models within its borders. Under the framework, AI labs would be required to practice responsible scaling by rigorously testing their most advanced models for safety risks and disclosing to the State their planned responses if safety risks are discovered. It also establishes strong liability for damages caused by foreseeable safety risks. To ensure California remains the center of AI innovation and to guide the development of large AI models toward safe practices, the proposal would also create CalCompute, a cloud-based compute cluster housed in California’s world-class public university system and available for use by AI researchers and smaller developers.


SB 294 is an intent bill, a type of legislation that is ineligible to move through the typical legislative process at this stage of the year. Lawmakers sometimes use intent bills to generate discussion and feedback before amending them with full legislative text and moving them through the formal legislative process; a bill to enact single-payer healthcare is currently employing a similar approach in the Assembly. Senator Wiener’s AI bill establishes the intent of the Legislature to enact sweeping safety rules governing AI development, a key first step before the measure moves through the regular legislative process next year. Over the coming months, Senator Wiener looks forward to receiving significant feedback and engagement from a range of stakeholders inside and outside the industry.


SB 294 is among the first attempts at broad regulation of AI. Weeks ago, Governor Newsom issued an executive order on AI, and Congress is currently weighing two proposals to address the new technology. President Biden has also issued guidance in the space.


California has a long history of leading the nation on critical technology issues. In 2018, the Legislature passed Senator Wiener’s SB 822, which established the nation’s strongest net neutrality protections.


Over the past decade, AI models (particularly large language models) have developed remarkable capabilities with astonishing speed. The introduction of ChatGPT in November 2022 demonstrated that “generative AI” had progressed to the point that companies could release products that generate output often indistinguishable from what a human might produce.


Developers have applied these striking capabilities to a wide range of use cases. Startups are using AI to improve cybersecurity, and experts project it could soon play a vital support role in providing certain kinds of healthcare. Users are also reporting surprising applications of the technology, such as aiding the design of new parts for a spaceship.


Alongside these beneficial capabilities are others that present new risks to public safety and security. There are early signs that future large language models could lower the barriers to causing large-scale societal harm, including through the misuse of biological agents, chemical weapons, and other dangerous technologies. Users could also prompt some models to write working cyberattack code, and one paper showed they could be used to conduct a phishing scam against elected officials by generating messages that appeared to come from constituents. AI models also allow these dangerous capabilities to be scaled easily, enabling massive harm. The creators of AI technology, in both academia and industry, are actively warning the rest of society about these risks.


Behind these rapid advances is a massive increase in the amount of computer hardware used to train the largest AI systems, ballooning development costs from tens of millions to hundreds of millions of dollars for the most advanced models. According to some analyses, the amount of effective computation used to train the largest AI systems doubled roughly every 3.4 months for much of the last decade, roughly seven times faster than the famous Moore’s law, which drove the rapid growth of computational power in consumer electronics over the past four decades. For these reasons, AI models are likely to improve further on their current capabilities, and may suddenly develop new and surprising ones. Policymakers cannot afford to wait until risks have already been realized before engaging: this technology is too important to regulate only in hindsight. Good policy can help contain risks at low cost while harnessing this technology’s power for good.
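
The “seven times faster” figure follows directly from comparing the two doubling times (a back-of-the-envelope comparison, assuming the commonly cited two-year doubling period for Moore’s law):

$$\frac{24 \text{ months per doubling (Moore's law)}}{3.4 \text{ months per doubling (frontier AI compute)}} \approx 7$$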


An overwhelming share of this innovation and progress has occurred in California. The state is home to 35 of the world’s top 50 AI companies and a quarter of all AI patents, conference papers, and companies globally. 


The Safety in AI Act proposes a framework to support this massive innovation and direct it in ways that minimize the risks AI poses to public safety.


The framework applies to companies that develop AI models at the frontier of the industry’s capabilities. The requirements would be triggered by the amount of computational power used to develop and train a model.
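
To illustrate how a compute-based trigger could work in practice, consider the minimal sketch below. The bill does not yet specify a threshold, so the 1e26 figure is a hypothetical placeholder, and the 6 × parameters × tokens estimate is a widely used rule of thumb for dense transformer training, not language from the bill.

    # Illustrative sketch only -- SB 294 does not specify these numbers.
    HYPOTHETICAL_FLOP_THRESHOLD = 1e26  # assumed statutory trigger, in training FLOPs

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Rough training-compute estimate using the common ~6 * N * D rule of
        thumb for dense transformers (N = parameters, D = training tokens)."""
        return 6.0 * parameters * training_tokens

    def is_covered_model(parameters: float, training_tokens: float) -> bool:
        """Would this training run trip the hypothetical compute trigger?"""
        return estimated_training_flops(parameters, training_tokens) >= HYPOTHETICAL_FLOP_THRESHOLD

    # Example: a 70-billion-parameter model trained on 15 trillion tokens uses
    # roughly 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the hypothetical threshold.
    print(is_covered_model(7e10, 1.5e13))  # False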


The SB 294 framework requires that companies developing models at the frontier of current capabilities present plans that detail what safety risks they are testing their models for, how they plan to improve their testing over time, what actions and additional safety measures they would take in response to warning signs of danger, and under what conditions they would consider it dangerous to add unpredictable new capabilities to their models. The evaluations and disclosures must cover a range of safety risks resulting from the malicious use of the model, as well as unintended consequences of its use. The companies must also disclose their plans to ensure the safety of their models as they continue to scale to more advanced AI systems. A reviewing body would assess the disclosures and conduct additional audits as needed to ensure uniform transparency into safety precautions across the industry.


The SB 294 framework establishes strong liability for those who fail to take appropriate precautions to prevent both malicious uses and unintended consequences that threaten public safety. 


To nurture the benefits that AI practitioners hope to deliver, SB 294 proposes to launch CalCompute, an AI cloud compute cluster dedicated to research into the safe and secure development of large-scale AI systems. The cluster can foster innovation for small businesses while guiding AI development toward safe and secure practices. CalCompute builds on Stanford’s National Research Cloud proposal, leveraging California’s world-class public university and community college system to advance the state of the art for an industry that has long called California home.


Finally, the SB 294 framework requires that commercial cloud computing companies institute Know Your Customer (KYC) policies on all offerings large enough to train frontier AI models. 


“Large-scale AI presents a range of opportunities and challenges for California, and we need to get ahead of them and not play catch up when it may be too late,” said Senator Wiener. “As a society, we made a mistake by allowing social media to become widely adopted without first evaluating the risks and putting guardrails in place. Repeating the same mistake around AI would be far more costly. At the same time, this technology shows incredible potential to improve people’s lives. We need to engage with this new technology, and direct the incredible innovation California is known for to chart a course for the rest of the world.”


“SB 294 is a framework we will fill in over the next several months by engaging closely with a range of researchers, industry leaders, security experts, and labor leaders,” Senator Wiener continued. “The best way to get feedback on an idea is to put legislative text in print, so after consulting a broad array of experts in industry and academia, we’ve introduced a framework we think addresses the most critical risks of the new technology while preserving and nurturing its incredible benefits. Now that the framework is public, we will collect feedback and continue refining the proposal throughout the fall in preparation to move a fully developed policy through the legislative process in January.”


“AI progress presents significant opportunities and significant risks,” said Paul Christiano, founder of Alignment Research Center (ARC), which was consulted in the development of the bill. “We can cut back most of the risk - while preserving almost all of the upside - if AI labs have clear policies for looking for early warning signs of dangerous capabilities and reacting appropriately. By requiring such policies California can hold labs accountable for improving safety measures in line with emerging capabilities, without meaningfully slowing down innovation.”


“It's great that between the legislature and the executive, California is taking AI regulation seriously,” tweeted Roy Bahat, Head of Bloomberg Beta. “California is the center of where so much AI gets -- and should continue to get -- built, and we have a critical role to play. Here's to pushing more public dollars for research along with appropriate limits on AI.”


“Regulation often responds to harms from new technologies at a lag after they happen,” said Katja Grace, Lead Researcher at AI Impacts. “With advanced artificial intelligence that method is unacceptably dangerous: the harms we foresee from uncontrolled AI development are many, potentially devastating, and will hit many aspects of life at once. And there will probably be more harms we don’t foresee. Yet development is fast and poised to fuel its own acceleration. I think this bill proposes many sensible and promising preparations for steering AI development toward its abundant promise, through the maze of potential affliction and catastrophe. I’m excited to see California taking a lead in beginning this incredibly important project.”


Read more about the Safety in Artificial Intelligence Act:

Exclusive: California Bill Proposes Regulating AI at State Level (TIME, September 13th)

###