If countries cannot agree on what to do with artificial intelligence, they should at least agree on what AI cannot do. That is the premise of a call signed by over two hundred people – including nine former heads of state and ten Nobel Prize winners – presented by Nobel Peace Prize laureate Maria Ressa during the 80th session of the UN General Assembly.
Among the signatories are Geoffrey Hinton and Yoshua Bengio, Turing Award winners and pioneers of artificial intelligence; physics Nobel laureate Giorgio Parisi; economics Nobel laureate Joseph Stiglitz; historian Yuval Noah Harari; scientists from OpenAI, Anthropic and Google; and politicians such as Yanis Varoufakis, former Italian Prime Minister Enrico Letta and former Italian Minister of Universities and Research Maria Chiara Carrozza. The initiative calls for a global agreement, by the end of 2026, on red lines that must not be crossed.
THE RISKS OF UNCONTROLLED AI
“AI,” the call states, “has great potential to improve human well-being, but the current trajectory of development brings unprecedented risks.” It could soon surpass human capabilities and “strengthen threats such as engineered pandemics, widespread disinformation, large-scale manipulation of people – including minors – risks to national and international security, mass unemployment, and systematic violations of human rights.”
The problem already exists. “Some advanced AI systems have shown deceptive and harmful behavior, yet these systems are increasingly given more autonomy to act and make decisions in the real world.”
Recent tragic cases linked to AI – psychological harm caused by advanced systems, even suicides – are only “the first signs of much greater dangers that await us,” the statement warns. Many experts emphasize that, without adequate governance, “in the coming years it will be increasingly difficult to guarantee effective human control over artificial intelligence.”
THE THIN RED LINE
The solution must come through an international agreement with “clear and verifiable limits to prevent universally unacceptable risks” and control mechanisms that apply to all providers of advanced AI. The call does not specify what these limits should be, but it offers some examples: prohibiting AI from impersonating humans, preventing systems from autonomously replicating themselves, and keeping artificial intelligence out of decisions on the use of nuclear weapons.
IT’S LIKE NUCLEAR ENERGY
Why is a global institution needed rather than the policies of individual companies? Research shows that many companies prioritize profit over safety, and their internal policies “do not lead to real implementation.” Stuart Russell, a professor of computer science at the University of California, Berkeley and one of the signatories, puts it this way: companies can respect the limits “by not building general artificial intelligence until they know how to make it safe… just as nuclear power developers did not build nuclear plants until they had an idea of how to prevent them from exploding.”
Yuval Noah Harari is even more emphatic. “For thousands of years, people have learned – sometimes the hard way – that powerful technologies can have consequences that are as dangerous as they are beneficial. With AI, we may not be able to learn from our mistakes, because AI is the first technology that can make decisions on its own, invent new ideas on its own, and escape our control.” The call ends with a clear demand for governments: to reach “an international agreement on limits to AI by the end of 2026, ensuring effective implementation through strong control and enforcement mechanisms.”