On a recent Sunday morning, I found myself in ill-fitting scrubs, lying on my back in the claustrophobic confines of a functional magnetic resonance imaging (fMRI) machine at a research facility in Austin, Texas, USA. “The things I do for television”, I thought.
Anyone who has had an MRI or fMRI scan will tell you how noisy it is – electrical currents swirl around, creating a powerful magnetic field that produces detailed scans of your brain.
On this occasion, however, I could barely hear the loud clatter of the machine’s magnets: I had been given a pair of specialized headphones, which began playing segments from The Wizard of Oz audiobook.
Why?
Neuroscientists at the University of Texas at Austin have found a way to translate brain activity scans into words using the same artificial intelligence technology that powers the groundbreaking ChatGPT chatbot.
The discovery could revolutionize the way people who have lost the ability to speak communicate. It is just one pioneering application of artificial intelligence (AI) developed over the past few months, as the technology continues to advance and looks set to touch every part of our lives and our society.
“So we don’t like to use the term mind reading,” Alexander Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We think it evokes things we’re not really capable of.”
Huth volunteered as a subject for this study, spending more than 20 hours confined to an MRI machine listening to audio clips while the machine took detailed pictures of his brain.
An artificial intelligence model analyzed his brain activity alongside the audio he listened to and, over time, learned to predict the words he was hearing just by looking at his scans.
The researchers used San Francisco-based startup OpenAI’s first language model, GPT-1, which was developed with a huge database of books and websites. By analyzing all this data, the model learned how sentences are constructed – essentially, how humans speak and think.

The researchers trained the AI to analyze the brain activity of Huth and other volunteers while they listened to specific words. Eventually, the AI learned enough to predict what Huth and the others were listening to or watching just by monitoring their brain activity.
I spent less than half an hour in the machine and, as expected, the AI was unable to decode that I was listening to a portion of The Wizard of Oz audiobook describing Dorothy walking down the yellow brick road.
Huth listened to the same audio, but because the AI model was trained on his brain, it was able to accurately predict parts of the audio he was hearing.
The technology is still in its infancy, and while it shows great promise, its limitations may come as a relief to some: AI can’t easily read our minds, yet.
“The real potential application of this is helping people who can’t communicate,” Huth explained.
He and other UT Austin researchers say they believe the innovative technology could be used in the future by people with “locked-in” syndrome, stroke victims and others whose brains are working but who are unable to speak.
“Ours is the first demonstration that we can achieve this level of precision without brain surgery. So we think this is the first step on this road to really help people who can’t speak without needing neurosurgery,” he said.
Applications of the technology
While the revolutionary medical advances are undoubtedly good news and could change the lives of patients struggling with debilitating illnesses, they also raise questions about how the technology can be applied in controversial environments.
Could it be used to extract a prisoner’s confession? Or expose our deepest, darkest secrets?
The short answer, say Huth and his colleagues, is no — not at the moment.
For starters, the brain scans need to take place in an MRI machine, the AI technology needs to be trained on an individual’s brain for many hours, and, according to the Texas researchers, the subjects need to give their consent. If a person actively resists listening to the audio or thinks about something else, the brain scans will not be successful.
“We think everyone’s brain data should be kept private,” said Jerry Tang, lead author of a paper published earlier this month detailing his team’s findings. “Our brains are one of the final frontiers of our privacy.”
Tang explained, “Obviously, there are concerns that brain decoding technology could be used in dangerous ways.” Brain decoding is the term researchers prefer to use instead of mind reading.
“I feel like mind reading evokes this idea of getting to the little thoughts that you don’t want to let slip, like reactions to things. And I don’t think there’s any suggestion that we can actually do that with that kind of approach,” explained Huth. “What we can get is the big ideas you’re thinking about. The story someone is telling you, if you’re trying to tell a story inside your head, we can do that too.”
Last week, makers of generative AI systems, including OpenAI CEO Sam Altman, descended on Capitol Hill to testify before a Senate committee about the risks posed by the powerful technology. Altman warned that developing AI without safeguards could “do significant harm to the world” and urged lawmakers to implement regulations to address those concerns.
Echoing Altman’s warning, Tang told CNN that lawmakers need to take “mental privacy” seriously to protect “brain data” – our thoughts – two of the most dystopian terms I’ve heard in the AI age.
And while the technology only works in very limited cases at the moment, that may not always be true.
“It’s important not to have a false sense of security and think that things will be like this forever,” warned Tang. “Technology can improve and that can change how well we can decode and change whether decoders require a person’s cooperation.”
Source: CNN Brasil

Charles Grill is a tech-savvy writer with over 3 years of experience in the field. He writes on a variety of technology-related topics and has a strong focus on the latest advancements in the industry. He is connected with several online news websites and is currently contributing to a technology-focused platform.