For decades, the electrical impulses that travel through the human brain have defied straightforward interpretation. Now, thanks to advances in artificial intelligence, scientists can translate neural signals into intelligible words and sentences. A 52-year-old woman, paralyzed by a stroke 19 years ago and unable to speak clearly, was able to communicate through a computer screen. Identified as participant T16, she had a small electrode array surgically implanted in the frontal lobe of her brain. The study, conducted by researchers at Stanford University in California, also included three patients with amyotrophic lateral sclerosis (ALS), a neurodegenerative disease that causes progressive paralysis.
The system, based on artificial intelligence algorithms, decoded the signals from neurons as the patient imagined she was pronouncing words. These signals were translated into text in real time on the screen. The results, published in August 2025, are considered a significant step towards what is often described as “mind reading.”
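To make the pipeline concrete, here is a deliberately toy Python sketch of the kind of real-time loop such a system runs: neural activity is binned into short windows, each window is passed to a decoder, and the decoded text is streamed to the screen. The decode_window function, its vocabulary, and the fake signal are invented stand-ins, not the Stanford system.

```python
# A bare-bones sketch of a real-time decode loop. decode_window() is a
# hypothetical stand-in for a trained neural decoder; everything here
# is synthetic and for illustration only.
import time
import numpy as np

VOCAB = ["hello", "water", "yes", "no", "thank", "you"]  # toy vocabulary

def decode_window(window: np.ndarray) -> str:
    """Placeholder decoder: maps one feature window to a word token."""
    return VOCAB[int(abs(window.sum())) % len(VOCAB)]

def stream_decode(windows, window_ms: int = 80):
    """Decode each binned window and 'display' the word as it arrives."""
    for window in windows:
        print(decode_window(window), end=" ", flush=True)
        time.sleep(window_ms / 1000)  # pace the loop like real time
    print()

# Fake signal: 10 consecutive 64-channel feature windows.
rng = np.random.default_rng(2)
stream_decode(rng.normal(size=(10, 64)))
```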
Efforts to create brain-computer interfaces (BCIs) date back decades. As early as 1969, American neuroscientist Eberhard Fetz demonstrated that the activity of a single neuron could control an external device. Only in recent years, however, has machine learning made it possible to translate the complex signals associated with speech. In 2021, researchers at Stanford University enabled a tetraplegic man to type 18 words per minute by imagining that he was handwriting letters. In 2024, a team at the University of California, Davis translated the speech attempts of an ALS patient into text with 97.5% accuracy at a speed of about 32 words per minute.
These technologies rely on microelectrode arrays placed in the motor cortex, the area of the brain responsible for muscle movement. Algorithms are trained to recognize patterns of neural activity that correspond to phonemes, the smallest units of language. One of the latest innovations is the attempt to decipher “inner speech,” words that are formed in the mind without being physically uttered.
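As a rough illustration of that training step, the toy example below fits a simple classifier mapping synthetic multi-channel "neural" feature vectors to phoneme labels. Real decoders use far richer temporal models and real recordings; the channel count, phoneme inventory, and data here are all fabricated for demonstration.

```python
# Toy phoneme classification on synthetic "neural" features. Real
# systems decode multi-channel spike-band activity with recurrent or
# transformer models; this linear classifier is illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_CHANNELS = 64          # hypothetical electrode count
N_PHONEMES = 39          # roughly the English phoneme inventory
PER_PHONEME = 200        # synthetic samples per phoneme

# Each phoneme gets its own mean activity pattern plus noise.
means = rng.normal(size=(N_PHONEMES, N_CHANNELS))
X = np.vstack([m + 0.8 * rng.normal(size=(PER_PHONEME, N_CHANNELS))
               for m in means])
y = np.repeat(np.arange(N_PHONEMES), PER_PHONEME)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"toy phoneme accuracy: {clf.score(X_te, y_te):.2%}")
```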
The researchers also tested whether the system could recognize spontaneous thoughts. In tasks where participants imagined specific sentences, the system achieved real-time accuracy of up to 74%. In more open-ended scenarios, however, such as thinking about a favorite quote from a movie, the decoded output remained unreliable. According to the researchers, the neural patterns of inner speech resemble those of attempted speech but are weaker in intensity. In 2025, the team at the University of California, Davis was able to decode not only words but also nonverbal elements such as intonation, pitch, and rhythm. A patient with a severe speech disorder was able to modulate his synthesized voice and even sing simple melodies; testers rated about 60% of the words as intelligible.
In parallel, researchers in Japan have developed techniques to reconstruct images from brain activity recorded with functional magnetic resonance imaging (fMRI). These experiments pair generative models such as Stable Diffusion with fMRI data collected at the University of Minnesota. In many cases, the artificial intelligence generated images resembling those the participants had seen. In 2025, the reconstruction of music from brain scans was also tested, using an algorithm developed by the technology company Google. Although the quality remains limited, the researchers were able to identify the basic character and category of the music heard.
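Underlying these reconstruction studies is typically a two-stage recipe: first learn a mapping, often linear, from fMRI voxel activity to the embedding space of a pretrained generative model, then let that model render an image (or audio) from the predicted embedding. The minimal sketch below shows only the first stage on synthetic data; the voxel count, embedding size, and data are invented placeholders, not the published setups.

```python
# Illustrative first stage only: fit a linear map from synthetic fMRI
# voxel activity to a synthetic image-embedding space. A generative
# model such as Stable Diffusion would then be conditioned on the
# predicted embeddings. All sizes and data here are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
N_TRIALS, N_VOXELS, EMB_DIM = 500, 1024, 77   # invented dimensions

true_map = rng.normal(size=(N_VOXELS, EMB_DIM))
fmri = rng.normal(size=(N_TRIALS, N_VOXELS))
embeddings = fmri @ true_map + 0.5 * rng.normal(size=(N_TRIALS, EMB_DIM))

model = Ridge(alpha=10.0).fit(fmri[:400], embeddings[:400])
pred = model.predict(fmri[400:])

# How well do predicted embeddings correlate with the held-out truth?
corr = np.mean([np.corrcoef(p, t)[0, 1]
                for p, t in zip(pred, embeddings[400:])])
print(f"held-out embedding correlation: {corr:.2f}")
```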
Experts predict that in the coming years, brain-computer interface technologies will move from the lab into wider use. Companies such as Neuralink, founded by Elon Musk, are developing brain chips for clinical and potentially commercial applications. These advances also raise ethical questions, including mental privacy and the protection of human rights, especially if the technology were ever to enable direct brain-to-brain communication. Despite the current technical limitations, these developments represent a historic step for patients who cannot otherwise communicate: a real opportunity to give voice to their thoughts and to open a new chapter in understanding the human brain.