California Gov. Gavin Newsom has vetoed a landmark artificial intelligence (AI) safety bill that was the nation’s most ambitious effort to regulate the emerging technology.
The bill, SB 1047, was intended to ensure the safety of large AI systems before their release. It would also have given the state attorney general the right to sue companies over serious harm caused by their technologies, and required an emergency shutdown mechanism in the event of biowarfare, mass casualties or major property damage.
Newsom justified his veto by saying that the bill targets only the largest AI systems, the so-called “frontier models,” without accounting for the potential risks and harms of the technology as a whole. He urged lawmakers to revise the bill for the next session.
“I do not believe this is the best approach to protecting the public from the real threats posed by the technology,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys them.”
The veto is likely to draw sharp criticism from the technology experts and academics who championed the bill. Newsom, a Democrat, was under intense pressure to block it; the measure had sparked a heated national debate over the regulation of artificial intelligence. A wave of lobbyists had visited his office in recent weeks, some touting the technology’s potential for great benefit, others warning of possibly irreparable harm.
California was poised to become a leader in regulating a technology that has exploded into public consciousness in recent years with the release of chatbots and strikingly realistic image and video generators. In the absence of federal legislation, the California legislature had taken a firm approach to limiting the technology, and both chambers had passed the bill nearly unanimously.
While lawmakers and regulators around the world have sounded alarms about the technology, few have taken action. Congress has held hearings but made no real legislative progress. The European Union, with its AI Act, has restricted the use of risky applications such as facial recognition software.