California Takes Stand Against AI Overregulation
In a pivotal moment for artificial intelligence policy, California Governor Gavin Newsom vetoed SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” on September 29, 2024. The decision marks a significant shift away from sweeping regulation of AI technologies, prioritizing innovation while still addressing safety concerns. For more context on the decision, see the related discussion of how California rejected AI regulatory extremism.

The Pitfalls of SB 1047
SB 1047 proposed a comprehensive regulatory framework for advanced AI systems based largely on hypothetical risks. It set arbitrary thresholds for “frontier” AI models (those trained with more than 10^26 floating-point operations at a cost exceeding $100 million) and speculative “critical harms,” while creating a new regulatory bureaucracy with extensive reporting and auditing requirements.
Critics, including former House Speaker Nancy Pelosi, argued that the bill would have stifled innovation and harmed the U.S. AI ecosystem. Its extraterritorial reach would have effectively allowed California to regulate AI companies across America, creating a compliance nightmare if other states followed suit with divergent rules of their own. For additional insight into the implications of such legislation, see the California CDT’s overview of legislation.

The Core Issue: Regulating Mechanisms vs. Outcomes
At its heart, SB 1047 violated a fundamental principle of effective technology policy: regulation should focus on real-world outputs and system performance rather than underlying capabilities. Rep. Jay Obernolte, chair of the House AI Task Force, emphasized the importance of avoiding policies “that stifle innovation by focusing on mechanisms instead of on outcomes.”
That approach risked crippling America’s AI capabilities at a time when global competition, particularly from China, is intensifying. Instead, policymakers should use scientific evidence and cost-benefit analysis to evaluate specific AI applications and address provable risks. Groups such as the Berkeley Tech Policy Initiative advocate for exactly this kind of informed policymaking.
Existing Regulatory Frameworks
It’s crucial to recognize that the U.S. already has extensive regulatory mechanisms in place. At the federal level, 439 departments and agencies, including dozens of independent regulators, possess long-standing tools to address algorithmic developments in their respective areas. The Federal Aviation Administration, the Food and Drug Administration, and the National Highway Traffic Safety Administration already regulate autonomous and algorithmic systems in their sectors.
While existing regulations may not always be applied optimally, it would be misguided to assume that the government lacks the power to address new technological concerns. The combination of targeted regulation and tort liability offers a more nuanced way to handle AI-related issues.
The Road Ahead
Following Newsom’s veto, the debate over AI regulation is likely to shift to the federal level. Several major federal AI bills under consideration in Congress would empower the National Institute of Standards and Technology (NIST) to play a larger role in overseeing algorithmic systems, including frontier model safety.
At the state level, future legislation may follow the approach taken by Colorado’s AI law, signed by Governor Jared Polis in May 2024, which focuses on preventing “algorithmic discrimination” through impact assessments and audits. While different from SB 1047, such measures still raise concerns about their impact on innovation and competition.
A Call for Balanced Regulation
As the AI regulation debate continues into 2025, policymakers must embrace humility and forbearance when addressing this rapidly evolving technology. Governor Newsom emphasized that “any framework for effectively regulating AI needs to keep pace with the technology itself.”
This underscores the need for targeted, iterative policy responses rather than sweeping measures like SB 1047. The goal should be to protect against actual threats without unnecessarily hindering the potential of AI to advance the public good.
California’s rejection of AI regulatory extremism sets an important precedent for future policymaking. It highlights the delicate balance between ensuring safety and fostering innovation in the AI sector. As we move forward, it’s crucial that regulations are grounded in scientific evidence, focus on real-world outcomes, and remain flexible enough to adapt to this rapidly changing field.
Frequently Asked Questions
What was the main outcome of Governor Gavin Newsom’s veto of SB 1047?
Governor Newsom’s veto of SB 1047 marked a significant shift away from sweeping regulation of AI technologies, prioritizing innovation while still addressing safety concerns.
What were the major criticisms of SB 1047?
Critics argued that SB 1047 would stifle innovation, create a compliance nightmare for AI companies, and impose arbitrary thresholds for AI models based on hypothetical risks.
Why is regulation focusing on outcomes rather than mechanisms important?
Focusing on outcomes rather than mechanisms ensures that regulations do not hinder innovation and that they address real-world performance of AI systems, which is crucial for maintaining competitiveness in the global AI landscape.
What existing regulatory frameworks are already in place for AI in the U.S.?
The U.S. already has extensive regulatory mechanisms in place, with 439 federal departments and agencies equipped to address algorithmic developments in their sectors, including the Federal Aviation Administration and the Food and Drug Administration.
What is the potential future direction of AI regulation following the veto?
The debate over AI regulation is expected to shift to the federal level, with several major bills under consideration that would empower the National Institute of Standards and Technology (NIST) to oversee algorithmic systems.
How does Colorado’s AI bill differ from SB 1047?
Colorado’s AI bill focuses on preventing “algorithmic discrimination” through impact assessments and audits, contrasting with SB 1047’s broad and comprehensive regulatory framework.
Why should policymakers approach AI regulation with humility and forbearance?
Because the technology is evolving so rapidly, policymakers should regulate with caution and flexibility, keeping rules adaptable enough to keep pace with AI without imposing excessive restrictions.
Why is balancing safety and innovation crucial in AI policymaking?
Balancing safety and innovation is essential to protect against actual threats posed by AI while ensuring that regulations do not stifle the potential benefits and advancements that AI can offer to society.
What role does scientific evidence play in AI regulation?
Scientific evidence is vital for creating informed regulations that focus on real-world outcomes, allowing policymakers to effectively evaluate specific AI applications and address provable risks.
What precedent does California’s rejection of SB 1047 set for future AI policymaking?
California’s rejection of SB 1047 sets a precedent for future policymaking by highlighting the need for targeted, evidence-based regulations that prioritize innovation while ensuring safety in the rapidly evolving AI sector.
Regulation without practicality is just bureaucracy in disguise. California’s move against SB 1047 is a necessary pushback against policies that prioritize theoretical fears over actual innovation. Regulations should serve real-world performance rather than create compliance hurdles that slow progress. Embracing a more evidence-based approach can foster the innovation we need to stay competitive on a global scale. Let’s not be left behind in the race for AI advancement.