The risks posed by the rapid development of advanced artificial intelligence could outpace the ability of governments and societies to prepare, a leading British technology security expert has warned. David Dalrymple, programme director and artificial intelligence (AI) security expert at the UK-based research agency Aria, told The Guardian that the technology's capabilities are growing at an alarming rate. According to him, advanced AI systems are approaching the ability to perform every function currently performed by humans, and to do so faster, more cheaply and to a higher standard. “We risk being outmaneuvered in all the key areas that enable us to control society, the economy and the planet,” Dalrymple said.
The expert notes a huge gap between the public sector's and technology companies' understanding of how powerful the advances expected in the coming years will be. “Things are moving extremely fast and we may not have enough time to anticipate the risks from a security perspective,” he stressed, adding that within five years most economically valuable tasks could be performed by machines.
Dalrymple warned that governments should not assume that advanced AI systems are trustworthy. According to him, economic pressure means the development of security mechanisms is lagging behind the technology itself. The focus should therefore be on controlling and mitigating its negative consequences, especially in critical sectors such as energy and infrastructure. At the same time, the British government’s AI Security Institute (AISI) reported this month that the capabilities of advanced AI models are improving rapidly across all domains, with the performance of some doubling every eight months. According to AISI, leading models can now perform tasks at the level of an intern in about 50% of cases, up from about 10% a year ago, and some advanced systems can autonomously complete tasks that would take a human expert more than an hour.
One of the main security concerns remains the self-replication capability of AI systems. Tests conducted by AISI showed that two highly advanced models achieved success rates of over 60%, though the institute stressed that such extreme scenarios remain impossible in real-world conditions. Dalrymple predicts that by the end of 2026, AI will be able to automate research and development work on a large scale, further accelerating technological progress and increasing the need for urgent security measures.

