
Friday, April 12, 2024

Mind reading with Artificial Intelligence

Artificial Intelligence strikes again, this time with the ability to read what people are thinking, but the advance raises concerns over mental privacy.

A recent study has shown that scientists can use brain scans and Artificial Intelligence (AI) modelling to transcribe the “gist” of what people are thinking, a result described as a step towards mind reading. While the main goal of the language decoder is to help people who have lost the ability to communicate, the technology has raised questions about “mental privacy”. The scientists acknowledged this concern and ran tests showing that their decoder could not be used on anyone who had not first allowed it to be trained on their own brain activity over many hours inside a functional magnetic resonance imaging (fMRI) scanner.

The Science Behind Brain-Computer Interfaces

Previous research has shown that a brain implant can enable people who can no longer speak or type to spell out words or even sentences. These “brain-computer interfaces” focus on the part of the brain that controls the mouth when it tries to form words. Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the study, said that his team’s language decoder “works at a very different level”. “Our system really works at the level of ideas, of semantics, of meaning,” Huth said. It is the first system to be able to reconstruct continuous language without an invasive brain implant, according to the study in the journal Nature Neuroscience.

Read More: The accidental discovery: How AI found a new exoplanet

How the Study Worked

For the study, three people spent a total of 16 hours inside an fMRI machine listening to spoken narrative stories, mostly podcasts such as the New York Times’ Modern Love. This allowed the researchers to map out how words, phrases, and meanings prompted responses in the regions of the brain known to process language. They fed this data into a neural network language model that uses GPT-1, the predecessor of the AI technology later deployed in the hugely popular ChatGPT. The model was trained to predict how each person’s brain would respond to perceived speech, and then to narrow down candidate word sequences until it found the one whose predicted response best matched the observed brain activity.
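To make that "predict, then narrow down" step concrete, the sketch below is a minimal illustration of the general idea rather than the authors' code: a toy word-embedding table and a linear "encoding model" stand in for the system fit to each participant's fMRI data, and a simple beam search keeps the candidate word sequences whose predicted brain responses most closely match the observed one. Every name, vocabulary entry, and number in it is an illustrative assumption, not a detail from the study.

```python
# Minimal sketch of encoding-model decoding: propose candidate word sequences,
# predict the fMRI response each would evoke, and keep the closest matches.
# The vocabulary, embeddings, and encoding weights are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["i", "don't", "have", "my", "driver's", "licence", "yet", "she", "drive"]
EMBED_DIM, VOXELS = 8, 32

# Placeholder word embeddings and a linear "encoding model" mapping sentence
# features to predicted voxel activity (the real model is fit to fMRI data).
embeddings = {w: rng.normal(size=EMBED_DIM) for w in VOCAB}
encoding_weights = rng.normal(size=(EMBED_DIM, VOXELS))

def features(words):
    """Average word embeddings as a crude sentence representation."""
    return np.mean([embeddings[w] for w in words], axis=0)

def predict_response(words):
    """Encoding model: predicted fMRI response for a candidate word sequence."""
    return features(words) @ encoding_weights

def decode(observed_response, beam_width=3, length=6):
    """Beam search: extend candidates word by word, keeping those whose
    predicted brain response best matches the observed response."""
    beams = [([], 0.0)]
    for _ in range(length):
        scored = []
        for words, _ in beams:
            for w in VOCAB:
                candidate = words + [w]
                err = np.linalg.norm(predict_response(candidate) - observed_response)
                scored.append((candidate, err))
        scored.sort(key=lambda item: item[1])  # smaller error = closer match
        beams = scored[:beam_width]
    return beams[0][0]

# Simulate an "observed" response for a target phrase, then recover its gist.
target = ["i", "don't", "have", "my", "driver's", "licence", "yet"]
observed = predict_response(target) + rng.normal(scale=0.05, size=VOXELS)
print("decoded gist:", " ".join(decode(observed)))
```

Because the candidates are scored against predicted brain activity rather than matched word for word, a sketch like this recovers a paraphrase of the stimulus rather than a transcript, which mirrors the "gist" behaviour the researchers describe.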

The Accuracy of the Decoder

To test the model’s accuracy, each participant then listened to a new story in the fMRI machine. The study’s first author Jerry Tang said the decoder could “recover the gist of what the user was hearing”. For example, when the participant heard the phrase “I don’t have my driver’s licence yet”, the model came back with “she has not even started to learn to drive yet”. The decoder struggled with personal pronouns such as “I” or “she,” the researchers admitted. But even when the participants thought up their own stories — or viewed silent movies — the decoder was still able to grasp the “gist,” they said. This showed that “we are decoding something that is deeper than language, then converting it into language,” Huth said.

What Does the Future Hold?

While the technology shows promise for those who have lost the ability to communicate, it also raises ethical concerns about privacy and consent. The researchers made it clear that the decoder could not be used on anyone who had not given consent to have their brain activity scanned for long periods of time. However, there are still questions about what could happen if this technology were to fall into the wrong hands. It is important to consider these implications before moving forward with further development of the technology.

Read More: The impact of AI on the artistic landscape

This study is another example of how beneficial Artificial Intelligence can be when used for the right purposes. AI has developed significantly, moving from simply responding to queries to decoding and translating patterns of brain activity. While the main goal of this study is to help those who have lost the ability to communicate, it also raises ethical concerns about privacy and consent, and those concerns are likely to remain even as the technology develops further.