Senator Wiener Responds to OpenAI Opposition to SB 1047
“What’s notable about the OpenAI letter is that it doesn’t criticize a single provision of the bill. I’m extremely appreciative that OpenAI appears to acknowledge that the bill’s specific core provisions — performing a safety evaluation for foreseeable risks on massively powerful AI models before releasing them and having the ability to shut down a massively powerful model that’s in your possession — are reasonable and implementable. Indeed, OpenAI has repeatedly committed to perform these safety evaluations.
“Instead of criticizing what the bill actually does, OpenAI argues this issue should be left to Congress. As I’ve stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so. Under OpenAI’s argument about Congress, California never would have passed its data privacy law, and given Congress’s lack of action, Californians would have no protection whatsoever for their data.
“OpenAI also raises national security concerns. Far from undermining national security, SB 1047’s requirements that AI companies thoroughly test their products for the ability to cause catastrophic harm can only strengthen our national security. That is why major national security figures — Lieutenant General John (Jack) Shanahan (U.S. Air Force) and Andrew Weber, former Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs — have endorsed SB 1047. Attached are their public statements in support of the bill.
“Finally, OpenAI claims that companies will leave California if the bill passes. This tired argument — which the tech industry also made when California passed its data privacy law, with that fear never materializing — makes no sense given that SB 1047 is not limited to companies headquartered in California. Rather, the bill applies to companies doing business in California. As a result, locating outside of California does not avoid compliance with the bill.
“Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk. We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”
SB 1047 has received support from major national security experts:
“I’m optimistic about this balanced legislative proposal that picks up where the White House Executive Order on AI left off,” said Lieutenant General John (Jack) Shanahan, United States Air Force, Retired. “SB 1047 addresses at the state level the most dangerous, near-term potential risks to civil society and national security in practical and feasible ways. It thoughtfully navigates the serious risks that AI poses to both civil society and national security, offering pragmatic solutions. It lays the groundwork for further dialogue among the tech industry and federal, state, and local governments, aiming for a harmonious balance between fostering American innovation and protecting the country.” Shanahan previously served as the inaugural director of the U.S. Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC).
“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Hon. Andrew C. Weber, former Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”