Imagine losing the ability to speak, write, or even gesture, left trapped inside your own mind with no way to communicate.
This was the reality for Stephen Hawking, one of the most brilliant minds of our time. After developing ALS, a progressive neurodegenerative disease that attacks the motor neurons in the brain and spinal cord, causing muscle weakness and paralysis, he had to rely on specialized technology to express even a few words per minute.
Hawking was nevertheless able to communicate through a technology called the Assistive Context-Aware Toolkit (ACAT). This software, developed by Intel and designed specifically for his needs, provided features like word prediction and contextual support, allowing him to communicate through a speech synthesizer.

THE FUTURE
Millions of others, however, aren’t fortunate enough to afford such personalized systems and ongoing support. Neurological disorders such as aphasia, brain injuries, and degenerative diseases leave people unable to share their thoughts with the world, cutting them off from relationships, careers, and basic human connection. To address this urgent problem, researchers Jerry Tang and Alex Huth at the University of Texas at Austin have developed a groundbreaking AI tool that can decode brain activity into continuous text. Unlike earlier methods, which were slow, limited, and highly individualized, their system can adapt to a new user in about an hour, simply by analyzing brain scans as the person watches silent videos.
Their work represents a major leap forward—not just for neuroscience, but for restoring voices to those who have been silenced.
This novel research at UT Austin explores a new niche within the brain-computer interface (BCI) landscape. It focuses on systems that interpret the functional intent—the desire to change, move, control, or interact with something in the environment—directly from brain activity.
What makes this work especially noteworthy is its balance between two often competing goals: achieving high-fidelity decoding while maintaining a non-invasive approach.
Invasive BCI implants can deliver faster, more precise results but require neurosurgery, extensive calibration, and ongoing clinical management. Conversely, other non-invasive methods like EEG remain more portable but lack the resolution to produce rich, continuous text.
The system taps fMRI, a brain-imaging technique that, unlike a standard MRI scan of brain structure, measures activity by detecting changes in blood flow linked to neural firing, and pairs that data with a converter algorithm that adapts the model to a new individual in about an hour, even one who cannot comprehend spoken language. In doing so, UT Austin’s system demonstrates that meaningful, free-form semantic decoding is feasible without surgery or months of training.
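To make the idea of a “converter” concrete, here is a minimal sketch of one way such cross-person adaptation can work: learn a mapping from a new user’s brain responses into the response space of a reference user on whom a decoder was already trained. The data, voxel counts, and the ridge-regression mapping below are illustrative assumptions, not the UT Austin team’s actual implementation.

```python
# Illustrative sketch only: a simplified cross-subject "converter",
# NOT the UT Austin team's actual code.
# Assumption: we have fMRI responses from a reference user (on whom a text
# decoder was already trained) and from a new user, both recorded while
# watching the same silent videos.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500                    # shared stimulus timepoints (scan volumes)
voxels_new, voxels_ref = 800, 1000    # hypothetical voxel counts per subject

# Hypothetical data: rows = timepoints, columns = voxels.
X_new = rng.standard_normal((n_timepoints, voxels_new))   # new user's scans
X_ref = rng.standard_normal((n_timepoints, voxels_ref))   # reference user's scans

# Fit a linear map from the new user's voxel space to the reference space.
# Because both subjects watched the same videos, corresponding rows describe
# the same moment of the stimulus.
converter = Ridge(alpha=1.0)
converter.fit(X_new, X_ref)

# At decode time, project the new user's brain activity into the reference
# space, then hand it to the decoder trained on the reference user.
X_projected = converter.predict(X_new)
# decoded_text = pretrained_decoder(X_projected)   # hypothetical decoder call
```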
They are now working with Maya Henry, an associate professor in the University of Texas at Austin’s Dell Medical School and Moody College of Communication who studies aphasia, to test whether their improved brain decoder works for people with aphasia.
“It’s been fun and rewarding to think about how to create the most useful interface and make the model-training procedure as easy as possible for the participants,” Tang said. “I’m really excited to continue exploring how our decoder can be used to help people.”
The UT Austin neurotech research is something of a microcosm of a quickly advancing field. For instance, in a recent BrainGate2 clinical trial at Stanford University, a 69-year-old man with a C4 AIS C spinal cord injury successfully piloted a virtual quadcopter using only his thoughts.
Two 96-channel microelectrode arrays were implanted in the “hand knob” region of his left precentral gyrus, enabling real-time decoding of neural signals as he attempted to move individual fingers.
The participant achieved a mean target acquisition time of about two seconds and navigated through 18 virtual rings in under three minutes—over six times faster than a comparable EEG-controlled system.
Researchers suggest that this finger-level control could translate to more natural, multi-degree-of-freedom tasks, potentially facilitating improved independence and leisure activities for people living with paralysis.
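As a rough illustration of how attempted finger movements might be turned into flight commands, the sketch below fits a simple linear map from binned firing rates to a few continuous control dimensions. The channel counts, bin size, output grouping, and least-squares decoder are assumptions for demonstration, not the BrainGate2 trial’s actual pipeline.

```python
# Illustrative sketch only: a toy linear decoder mapping binned spike counts
# to continuous control outputs for a virtual quadcopter. All names and
# dimensions are assumptions, not the trial's real decoding pipeline.
import numpy as np

n_channels = 192      # two 96-channel microelectrode arrays
n_outputs = 4         # hypothetical control dimensions (e.g. finger groups)

# Hypothetical calibration data: firing rates per 20 ms bin, and the
# velocities the participant attempted to produce during cued movements.
rng = np.random.default_rng(1)
rates = rng.poisson(5, size=(3000, n_channels)).astype(float)
attempted_velocity = rng.standard_normal((3000, n_outputs))

# Ordinary least-squares fit of a channels -> velocities mapping.
weights, *_ = np.linalg.lstsq(rates, attempted_velocity, rcond=None)

def decode_step(rate_bin: np.ndarray) -> np.ndarray:
    """Map one bin of firing rates to a 4-D control vector
    (e.g. pitch, roll, yaw, throttle for the virtual quadcopter)."""
    return rate_bin @ weights

# In a real-time loop, each decoded vector would update the quadcopter's
# flight command for that bin.
command = decode_step(rates[0])
```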
Other brain-computer interfaces use invasive implants (such as electrocorticography grids) to achieve faster and more precise communication rates, sometimes reaching up to 62 words per minute, as Brown News has noted.
Yet the surgical risks, technical complexities and long-term maintenance requirements pose barriers to widespread adoption.
A core concern in any “mind-reading” technology is misuse or unauthorized data acquisition. The UT Austin brain decoder only works with cooperative participants who undergo proper training sessions.
If a person is unwilling or actively thinking of something else, the model’s output degrades into incoherence. Furthermore, if the decoder is trained on one individual’s brain signals but used on another, it produces nonsensical text. These safeguards help ensure that unauthorized mind-reading isn’t a near-term concern.
Still, that hasn’t stopped dramatic headlines from boldly, and prematurely, announcing that “mind-reading technology has arrived.”
Meta, for its part, has announced a milestone in this fast-moving field. Using magnetoencephalography (MEG), a non-invasive neuroimaging technique in which thousands of brain-activity measurements are taken per second, its researchers showcased an AI system capable of decoding the unfolding of visual representations in the brain with unprecedented temporal resolution.
As AI tools continue to evolve, many experts are optimistic about their diagnostic potential. Olivia Hoover, a medical student at Mount Sinai who studied neuroscience and biology at Harvard University, noted, “I believe that the most promising future for AI in medicine and the practice of neuroscience is predictive power… Early disease diagnosis means slower disease progression and sometimes even disease prevention.”
These applications may soon shift AI from a reactive tool to a proactive one, identifying neurological disorders before symptoms even appear.
Meta’s system, the company says, can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant. This opens an important avenue for the scientific community to understand how images are represented in the brain and then used as the foundations of human intelligence. In the long term, it may also provide a stepping stone toward non-invasive brain-computer interfaces in clinical settings that could help people who, after suffering a brain lesion, have lost their ability to speak.
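One common way to build this kind of decoder, sketched below, is to learn a linear map from short MEG windows into a pretrained image-embedding space and then retrieve the closest candidate image. The sensor counts, embedding size, and ridge model here are illustrative assumptions rather than Meta’s actual architecture.

```python
# Illustrative sketch only: decoding perceived images from MEG by mapping
# brain activity into an image-embedding space and retrieving the nearest
# candidate. Not Meta's actual system; all dimensions are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

n_trials = 2000
n_sensors, n_samples = 270, 120   # hypothetical MEG sensors x time samples
embed_dim = 512                   # dimensionality of the image embeddings

# Hypothetical training data: flattened MEG windows, plus the embedding of
# the image shown on each trial (embeddings would come from a pretrained
# vision model).
meg = rng.standard_normal((n_trials, n_sensors * n_samples))
image_embeddings = rng.standard_normal((n_trials, embed_dim))

decoder = Ridge(alpha=10.0)
decoder.fit(meg, image_embeddings)

# At test time: predict an embedding from new brain activity, then pick the
# nearest image in a candidate gallery (retrieval, not pixel reconstruction).
gallery = rng.standard_normal((100, embed_dim))    # candidate image embeddings
predicted = decoder.predict(meg[:1])               # one new MEG window

g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
p = predicted / np.linalg.norm(predicted)
best_match = int(np.argmax(g @ p.T))               # cosine similarity retrieval
```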
ETHICS AND LIMITATIONS
These methodologies currently require participants to spend an inordinate amount of time in fMRI scanners so the decoders can be trained on their specific brain data. The study, published in Nature Neuroscience, had research subjects spend up to 16 hours in the machine listening to stories, and even after that, the subjects were able to misdirect the decoder if they wanted to.
As Jerry Tang, one of the lead researchers, phrased it, at this stage these technologies aren’t all-powerful mind readers capable of deciphering our latent beliefs as much as they are “a dictionary between patterns of brain activity and descriptions of mental content.” Without a willing and active participant supplying brain activity, that dictionary is of little use.
Still, critics claim that we might lose the “last frontier of privacy” if we allow these technologies to progress without thoughtful oversight. One concern, as Olivia Hoover put it, is that “many of the patients who would benefit from these kinds of advancements have reduced capacity to provide informed consent as a result of neurological degeneration and/or other neurological deficits.” This raises critical questions about autonomy, agency, and equitable deployment. Even for those who don’t share these concerns, general skepticism is rarely a bad idea.
The “father of public relations,” Edward L. Bernays, was not only Freud’s nephew, but he actively employed psychoanalysis in his approach to advertising. Today, a range of companies hire cognitive scientists to help “optimize” product experiences and hack your attention.
History assures us that as soon as the financial calculus works out, businesses looking to make a few bucks will happily incorporate these tools into their operations.
A singular focus on privacy, however, has led us to misunderstand the full implications of these tools. Discourse has positioned this emergent class of technologies as invasive mind readers at worst and neutral translation mechanisms at best. But this picture ignores the truly porous and enmeshed nature of the human mind. We won’t appreciate the full scope of this tool’s capabilities and risks until we learn to reframe it as a part of our cognitive apparatus.
While still in development, AI-powered thought decoding technology holds great potential to aid medical treatments—provided its limitations and ethical concerns are carefully monitored.