Pope Francis has called for an international treaty on AI regulation, as the EU continues to finalise details of the “world’s first” AI law.
Francis, who has been the victim of AI-generated images in the past, called for a binding treaty to ensure AI is developed and used ethically, arguing that the risks of a technology lacking the human values of compassion, mercy, morality and forgiveness are too great.
The Pope’s comments were part of his annual message for the World Day of Peace, which the Catholic Church celebrates at the start of the New Year.
The Vatican released the text of the message this morning.
Earlier this year, Pope Francis experienced the power of AI technology first-hand when he became the subject of an internet trend after an AI-generated image of him wearing a luxury white puffer jacket went viral, a demonstration of the growing reach of deepfakes and AI-generated imagery.
Last week, EU negotiators reached a provisional agreement on the world’s first comprehensive AI rules, which are expected to serve as a gold standard for governments considering their own regulations.
Negotiations on the final legal text began in June, but a fierce debate in recent weeks over how to regulate general-purpose AI such as ChatGPT and Google’s Bard chatbot threatened to derail the talks at the last minute.
The agreement follows the AI Safety Summit hosted by UK Prime Minister Rishi Sunak at Bletchley Park, Buckinghamshire, last month.
In the build-up to the summit, Sunak announced the establishment of a ‘world-first’ UK AI Safety Institute.
The summit concluded with the signing of the Bletchley Declaration, in which countries including the UK, United States and China agreed on the “need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community”.
Artificial intelligence has captured world attention over the past year thanks to breathtaking advances by cutting-edge systems like OpenAI’s ChatGPT that have dazzled users with the ability to produce human-like text, photos and songs.
But it has also raised fears about the risks that rapidly developing AI poses to jobs, privacy and copyright protection, and even to human life itself.
Francis acknowledged the promise AI offers and praised technological advances as a manifestation of the creativity of human intelligence, echoing the message the Vatican delivered at this year’s UN General Assembly where a host of world leaders raised the promise and perils of the technology.
But his new peace message went further, emphasising the grave, existential concerns raised by ethicists and human rights advocates about a technology that promises to transform everyday life in ways that could disrupt everything from democratic elections to art.
The Pope insisted that the development and deployment of AI must keep foremost the need to guarantee fundamental human rights, promote peace, and guard against disinformation, discrimination and distortion.
He reserved his greatest alarm for the use of AI in the armaments sector, a frequent focus of the Jesuit pope, who has called even traditional weapons makers “merchants of death.”
He noted that remote weapons systems had already led to a “distancing from the immense tragedy of war and a lessened perception of the devastation caused by those weapons systems and the burden of responsibility for their use.”
Francis called for “adequate, meaningful and consistent” human oversight of Lethal Autonomous Weapons Systems (or LAWS), arguing that the world does not need new technologies that merely “end up promoting the folly of war.”
Behind the scenes, work is already under way to limit the power of AI.
Last week, a committee of MIT leaders and scholars released a set of policy briefs outlining a framework for the governance of artificial intelligence.
Their framework extends existing regulatory and liability approaches in pursuit of a practical way to oversee AI.
The papers aim to strengthen US leadership in artificial intelligence broadly while limiting the harm the new technologies could cause and encouraging exploration of how AI deployment could benefit society.
The main policy paper, ‘A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector’, suggests AI tools can often be regulated by existing US government entities that already oversee the relevant domains.