News dropped on Monday that the US, UK, and 16 other countries have agreed to develop safe AI for humankind. AI systems should be “secure by design,” and that’s a great step forward, particularly after the developments at OpenAI last week. You might not realize it, but the recent fight between Sam Altman and the OpenAI board that oversees ChatGPT might be incredibly important.
Like any Cyber Monday deal, however, this one could be even better. It’s only an initial international agreement, and it’s missing most countries worldwide, including the ones you might fear won’t design secure AI.
Moreover, this first agreement is something of a gentlemen’s agreement, albeit a written one that runs 20 pages. That’s because it contains no provisions for what happens to companies that fail to adhere to the safe AI development principles detailed in the document.
Which countries signed the AI deal
According to a Reuters report on Monday, a senior US official described the AI deal as “the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are ‘secure by design.’”
The document was released on Sunday, with 18 countries inking the deal. Aside from the US and UK, the list includes Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, South Korea, New Zealand, Nigeria, Norway, Poland, and Singapore.
The list is somewhat underwhelming. Though it contains various EU countries, the European Union bloc as a whole is not among the signatories. The report does note that the EU is ahead of the US when it comes to AI regulation, as the region is already developing its own AI rules. Moreover, France, Germany, and Italy recently reached an agreement of their own on how AI should be regulated.
On the other hand, it’s unsurprising that countries with more authoritarian regimes aren’t on the list, including China, Iran, North Korea, and Russia.
I wouldn’t be surprised if more countries and territories want to sign this particular agreement or future ones. As I’ve discovered recently, rogue AI can become a huge problem for the entire world, no matter who creates bad artificial intelligence. This isn’t just fearmongering; it’s a reality that most generative AI users might fail to completely comprehend. And I include yours truly on that list.
Programs like ChatGPT, Google Bard, Claude, and other advanced AI tools are only good if they’re safe.
No real stakes for now
That said, the document doesn’t seem to have real stakes. The signatories agree that companies need to develop safe AI to protect customers and prevent the misuse of such tools. The agreement also covers data protection and vetting software suppliers, tackles questions about keeping AI tech safe from hackers, and recommends releasing AI models only after appropriate security testing.
But, as Reuters observes, the agreement is non-binding and carries “mostly general recommendations” about safe AI. Also, the deal does not “tackle thorny questions around the appropriate uses of AI, or how the data that feeds these models is gathered.”
These are important matters that should be addressed. Privacy and copyright issues were obvious as soon as ChatGPT came out.
Also, future deals should include ways to enforce the rules and punish offending companies. With AI, you probably don’t get to make more than one big mistake.
Still, it’s good news to see the world starting to come together on AI. Hopefully, more will come of it, guiding us to a future where safe AI helps out in every aspect of life rather than bad AI plotting to take over the world.
The 20-page agreement is available at this link.