Last week, EU countries gave the final seal of approval to the world’s first comprehensive artificial intelligence (AI) law.
The ambassadors of the 27 European Union member states unanimously approved the first-of-its-kind rulebook for AI, rubber-stamping the political agreement reached in December. The law aims to regulate AI systems based on the level of risk they pose.
Negotiations on the final legal text began in June, but a fierce debate over how to regulate general-purpose AI like ChatGPT and Google’s Bard chatbot threatened talks at the last minute.
EU policymakers eventually reached a political agreement on the main sticking points of the AI Act, a flagship bill to regulate AI based on its capacity to cause harm.
The complexity of the law meant its technical refinement took more than a month.
On 24 January, the Belgian presidency of the Council of the EU presented the final version of the text at a technical meeting.
However, most member states expressed concerns, saying they had not had enough time to analyse the text comprehensively.
These reservations were finally lifted with the adoption of the AI Act by the Committee of Permanent Representatives on Friday. But the green light from EU ambassadors was not guaranteed, since some European heavyweights resisted parts of the provisional deal until the very last days.
The primary opponent of the political agreement was France, which, together with Germany and Italy, asked for a lighter regulatory regime for powerful AI models, such as OpenAI’s GPT-4, that underpin general-purpose AI systems like ChatGPT and Bard.
Along with France, Italy and Germany requested that the rules in this area be limited to codes of conduct, as they did not want to clip the wings of promising European start-ups that might challenge American companies in this space.
However, the European Parliament was united in demanding hard rules for these models, considering it unacceptable to carve out the most potent types of AI from the regulation while leaving the entire regulatory burden on smaller actors.
Eventually, a compromise was reached on a tiered approach, with horizontal transparency rules for all models and additional obligations for powerful models deemed to pose a systemic risk.
The EU Commission also approved the creation of ‘AI factories’, designed to boost the uptake of generative AI in strategic sectors.
The AI law marks the first legally binding agreement on the regulation of the technology.
The world’s first global AI Safety Summit was hosted by UK Prime Minister Rishi Sunak in November last year.
The summit concluded with the signature of the Bletchley Declaration – the agreement of countries including the UK, United States and China on the “need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community”.
EU countries still have room to influence how the AI law will be implemented, as the Commission will have to issue around 20 acts of secondary legislation. The AI Office, which will oversee AI models, is also set to be staffed largely with seconded national experts.
The European Parliament’s Internal Market and Civil Liberties Committees will adopt the AI rulebook on 13 February, followed by a plenary vote provisionally scheduled for 10-11 April. The formal adoption will then be complete with endorsement at the ministerial level.