Commentary

Can California fill the federal void on frontier AI regulation?

Joshua Turner, Research Intern - Brookings, and
Nicol Turner Lee

June 4, 2024


  • Despite a flurry of bills, frameworks, and hearings, Congress has still failed to pass any legislation to either narrowly target specific AI risks or broadly ensure the responsible development and deployment of AI systems.
  • In this emerging patchwork of AI regulatory policy, California is uniquely positioned to have a crucial impact on AI governance.
  • California legislation does not need to be a perfectly comprehensive substitute for federal legislation; it just needs to be an improvement over the current lack of federal legislation.
U.S. President Joe Biden, Governor of California Gavin Newsom, and other officials attend a panel on artificial intelligence in San Francisco, California, U.S., June 20, 2023. REUTERS/Kevin Lamarque

For 60 years, the discussion of artificial intelligence (AI) remained largely within university classrooms, industry research labs, and academic journals. This began to change dramatically as these technologies became more general purpose, and with the release of OpenAI's ChatGPT in November 2022, the technology became more of a household name. AI's unprecedented capabilities have sparked widespread discussion of the norms needed to harness the benefits and mitigate the risks of AI. Many governments have emerged as leaders in shaping these norms. The United States has also taken steps to set an AI governance agenda, including releasing the Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology's AI Risk Management Framework, and the sweeping but limited 2023 executive order on AI.

However, these actions have come almost solely from the executive branch, which has limited regulatory powers. Despite a flurry of bills, frameworks, and hearings, Congress has still failed to pass any legislation to either narrowly target specific AI risks or broadly ensure the responsible development and deployment of AI systems. The House's bipartisan task force plans to draft policy, but the rapid pace of AI development and the emerging risks that accompany it will not wait for the federal government to determine policy solutions. The more recent bipartisan Senate AI roadmap led by Senator Chuck Schumer (D-NY) also offers much promise, provided the chamber's subcommittees can quickly draft and enact legislation.

In this federal legislative vacuum, states have stepped up as today's AI regulators. Some of the laws passed include measures to protect consumer data privacy, build institutional understanding of AI, prevent election interference, and establish state AI task forces, offices, and advisory councils. Recently, Utah passed a broader law establishing liability, notice-of-interaction requirements, and an Office of AI Policy. However, in this emerging patchwork, one state is uniquely positioned to have a crucial impact on AI governance: California.

The promise of California as an AI regulator

Perhaps California's potential to influence AI policy, particularly for frontier systems, stems from its status as an AI powerhouse, its large economy, and its Democratic penchant for regulation. The Golden State is home to 32 of Forbes' top 50 AI startups and to frontier players such as OpenAI, Anthropic, Meta, xAI, Google, and Microsoft, among others. OpenAI kickstarted the generative AI race with its release of ChatGPT back in 2022, and its GPT-4 large language model is still a top choice for AI users more than a year after its release. Anthropic is one of OpenAI's major competitors; its recently released Claude 3 model outperforms GPT-4 on many standard performance benchmarks. Meta, the parent company of Facebook and Instagram, is a major developer that produces open models (those whose inner workings are released to the public for widespread inspection, modification, and execution), and it just introduced its long-awaited Llama 3 model. The presence of such major AI developers makes California an attractive jurisdiction for advancing responsible AI policy. Furthermore, California produces 14.5% of the United States' GDP, and if it were a sovereign country, it would have the world's fifth- or sixth-largest economy behind the U.S. (without California), China, Japan, Germany, and India.
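
As a rough, back-of-the-envelope check on that ranking (using approximate 2023 nominal GDP figures, which vary by source): 14.5% of roughly $27.4 trillion in U.S. output puts California near $4.0 trillion, just behind Japan (about $4.2 trillion) and Germany (about $4.5 trillion) and ahead of India (about $3.6 trillion), which is why estimates place the state's economy fifth or sixth worldwide depending on the figures used.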

California's dual role as an AI hotspot and a powerful economy might enable its state AI regulation to accomplish many of the benefits of standard-setting legislation, such as requiring responsible development and deployment of frontier systems, mitigating theft of powerful dual-use models by malicious actors, and ensuring legal liability for AI harms. A newly introduced bill provides a model for such regulation and may garner many of these benefits for Californians and, incidentally, everyone else. In February 2024, State Senator Scott Wiener (D-San Francisco) introduced SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill's primary aim is to mitigate large potential risks posed by future frontier models, including the automation of large-scale cyberattacks and the production of novel biological weapons. It would address these risks by requiring the developers of a model trained using large amounts of computing power to demonstrate the safety of their system. If they were unable to make such a demonstration, they would be subject to additional requirements, including submitting yearly certifications of compliance with safety standards, implementing emergency shutdown protocols, and strengthening cybersecurity protections to prevent unauthorized access.

The bill also includes several other key provisions to increase safety. It would require providers of computing clusters to verify the identity and goals of customers seeking large amounts of computing power for AI training. It would also mandate that developers report safety incidents, including hazardous use and model theft. The state attorney general would be empowered to prosecute AI-related damages and threats to public safety. Additionally, the bill would establish a subsidized public computing cluster to democratize access to expensive AI training (cutting-edge AI-specialized computer chips can cost tens of thousands of dollars apiece). To oversee all these requirements, the bill would create the Frontier Model Division within the Department of Technology. Through its ambitious and broad provisions, this bill, if passed, could capture the promise of AI concentration in California.

Not only might California policies like Senator Wiener's be able to achieve many of the benefits of national legislation, but passing such policies may also be easier in California than at the federal level. First, California's government is overwhelmingly Democratic and therefore less subject to gridlock and polarization than is the case nationally. Governor Gavin Newsom is a Democrat, and the state Senate and Assembly have wide Democratic margins. The absence of partisan gridlock makes it easier to pass laws and regulations in California than in places with greater party competition. Second, California has a track record of passing groundbreaking technology regulation, such as an Internet of Things security law and the California Consumer Privacy Act (CCPA). These considerations may make it easier to advance AI policies in California than at the federal level.

The limits of California as an AI regulator

Despite its promise, there may be some limits to California's power. First, as with all regulations, there is the risk that the regulated companies will attempt to weaken or circumvent the policies impacting their business. For example, lobbying efforts by AI firms, including French startup Mistral AI, successfully influenced the EU AI Act to favor their businesses. Circumventing location-based regulations becomes more difficult if AI policies apply to all companies doing business in the state, as opposed to only those incorporated in the state, as is the case with the CCPA. Although such a jurisdictional requirement makes evasion difficult, some companies may be willing to bite the bullet and leave California markets to avoid compliance burdens. For instance, Anthropic was slow to make its Claude chatbot available in the EU, which is famous for its rigorous digital regulations.

Second, laws that impose sweeping requirements on interstate AI operations might run afoul of the Dormant Commerce Clause (DCC). The DCC is an inferred consequence of the Commerce Clause that limits the ability of states to overburden interstate or international commerce. Some legal scholars have accused the CCPA of violating the DCC because of the requirements it imposes on out-of-state business activities without achieving commensurate social benefit for Californians. While the CCPA has not been directly challenged in court on these grounds, the social, economic, and geopolitical importance of AI might incentivize cases against the constitutionality of far-reaching California AI legislation.

Third, there are AI policy goals that California cannot achieve due to its inherent limitations as a state government. The state cannot wield key policy tools for maintaining the U.S.'s international AI advantage, such as imposing export controls on AI-specialized chips or altering visa processes to attract foreign AI talent. It is also limited in other ways, such as in its ability to pursue international cooperation or influence military uses of AI.

Fourth, even if a California AI law succeeds in partially filling the regulatory void left by congressional inaction, it could interfere with or prevent the passage of comprehensive federal legislation. The American Data Privacy and Protection Act (ADPPA) of 2022 promised comprehensive data privacy protections for all Americans. However, Californians in Congress worried that the act would provide weaker protections for their constituents than the already-enacted CCPA. Despite analyses showing that the ADPPA actually provided stronger privacy protections overall, resistance from Californians (including then-Speaker Nancy Pelosi), combined with partisan disagreement over a provision exempting the CCPA from preemption, held up the bill long enough for the congressional session to expire. Although the outcome may have been contingent on unique situational factors, such as the most powerful member of Congress being from California, it still serves as a cautionary tale of how state legislation can block more widespread protections.

Finally, California is not the only state seeking to set the national agenda. States such as Texas and Florida have also been legislating on technology, developing regulations, and filing lawsuits against technology companies. Unlike the Golden State, those locales are dominated by Republicans and are enacting rules from a conservative point of view. How the states work out their differences, and what tech companies do in the face of competing mandates from liberal and conservative states, could limit the power of any one state to shape the national landscape.

The future of California as an AI regulator

Despite these potential limits, California still holds promise for directly requiring responsible AI development and deployment inside and outside its borders. California legislation does not need to be a perfectly comprehensive substitute for federal legislation; it just needs to be an improvement over the current lack of federal legislation. The state's power in the AI industry and its economic presence can give it considerable leverage in regulating the technology. Even if the above limits do prevent California from directly accomplishing wider AI regulation, it will still have substantial indirect influence, because the state is a trendsetter in public policy; in many cases, California regulations have spread to other jurisdictions. Regardless, California will certainly continue to play a key role in shaping the United States' AI policy response in the coming years. While Congress lags and AI surges forward, the Golden State can help the country keep pace in requiring responsible AI.

Acknowledgements and disclosures

Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.