California Governor Gavin Newsom speaks during the 2025 Clinton Global Initiative (CGI) in New York City, U.S., September 24, 2025.
Kylie Cooper | Reuters
California Governor Gavin Newsom on Monday signed into state law a requirement that ChatGPT developer OpenAI and other large players disclose how they plan to mitigate potential catastrophic risks from their cutting-edge AI models.
California is home to top AI companies including OpenAI, Alphabet's Google, Meta Platforms, Nvidia and Anthropic, and with this law seeks to lead on regulation of an industry critical to its economy, Newsom said.
"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a press release.
Newsom's office said the law, known as SB 53, fills a gap left by the U.S. Congress, which so far has not passed broad AI legislation, and provides a model for the U.S. to follow.
If federal standards are put in place, Newsom said, the state legislature should "ensure alignment with those standards – all while maintaining the high bar established by SB 53."
Last year, Newsom vetoed California's first attempt at AI legislation, which had faced fierce industry pushback. That bill would have required companies that spent more than $100 million on their AI models to hire third-party auditors annually to review risk assessments, and would have allowed the state to levy penalties in the hundreds of millions of dollars.
The new law requires companies with more than $500 million in revenue to assess the risk that their cutting-edge technology could break free of human control or aid the development of bioweapons, and to disclose those assessments to the public. It allows for fines of up to $1 million per violation.
Jack Clark, co-founder of AI company Anthropic, called the law "a strong framework that balances public safety with continued innovation."
The industry still hopes for a federal framework that would replace the California law, as well as others like it enacted recently in Colorado and New York. Last year, a bid by some Republicans in the U.S. Congress to block states from regulating AI was voted down in the Senate 99-1.
"The biggest danger of SB 53 is that it sets a precedent for states, rather than the federal government, to take the lead in governing the national AI market – creating a patchwork of 50 compliance regimes that startups don't have the resources to navigate," said Collin McCune, head of government affairs at Silicon Valley venture capital firm Andreessen Horowitz.
U.S. Representative Jay Obernolte, a California Republican, is working on AI legislation that could preempt some state laws, his office said, though it declined to comment further on pending legislation.
Some Democrats are also discussing how to enact a federal standard.
"It's not whether we're gonna regulate AI, it's do you want 17 states doing it, or do you want Congress to do it?" U.S. Representative Ted Lieu, a Democrat from Los Angeles, said at a recent hearing on AI legislation in the U.S. House of Representatives.
