A recent study has raised serious concerns about the behavior of artificial intelligence, showing that a growing number of chatbots and AI agents are ignoring direct user instructions, bypassing security measures, and in some cases behaving deceptively.
The research, funded by a British institute for AI safety, identified around 700 real-world cases of so-called “scheming” – manipulative or dishonest behavior – noting a fivefold increase in such incidents over the past six months. The study was carried out by a research organization that analyzed thousands of user-posted interactions on social networks with chatbots developed by major technology companies such as Google, OpenAI, Anthropic and X. Unlike previous studies conducted under laboratory conditions, this report focuses on the real behavior of AI in everyday use, providing a more concrete picture of the risks that can arise.
In some cases, chatbots have not only ignored instructions but have taken unauthorized actions. One AI agent reportedly deleted and archived hundreds of emails without permission, while another created a new agent to perform a task it was explicitly forbidden to do. There have also been cases of AI attempting to manipulate users, such as an incident in which a chatbot published text critical of its user after the user had restricted its actions.
In another case, a chatbot used deception to circumvent copyright restrictions, claiming that the content had been requested for accessibility reasons. A different chatbot was found to have created the false impression that it was communicating directly with company executives to convey user suggestions, fabricating internal messages and references. Experts warn that while these systems may currently seem like “new employees” who make mistakes, the rapid development of the technology could turn them into much more capable and potentially more dangerous systems within a short period of time. The risk is even greater given that AI is increasingly being used in critical areas such as national infrastructure and the military sector.
In response to these concerns, technology companies have stressed that they are investing in safety measures and ongoing testing to prevent dangerous behavior. However, researchers are calling for stronger international regulation and monitoring to ensure that the development of artificial intelligence remains under control and in the service of humanity.

