But California’s experience had other lessons for Britain, too.
In vetoing the bill, Newsom took aim at its narrow focus on frontier systems, arguing that it ignored the context in which they're deployed — a criticism that has also been leveled at the U.K.'s approach.
Britain's ruling Labour Party has repeatedly said its bill will place binding safety requirements only on the companies building the most powerful models, rather than regulating how the technology is used.
“If you have a very small AI that’s used only by a number of people, but that AI decides whether somebody will get a loan or not, well, that should absolutely be regulated, right? Because it has a real impact on people’s lives,” said Stefanie Valdés-Scott, head of policy and government relations in Europe for Adobe.
Valdés-Scott argued that the U.K., like the EU in its AI Act, should target the applications of AI models in specific areas rather than the raw capability of the top-performing systems.
Meta executive and former U.K. Deputy Prime Minister Nick Clegg recently said Britain had "wasted a huge amount of time" because of the energy the last government spent on the risks posed by AI in areas like cybersecurity and bioterrorism.