US Researchers are a Step Closer to Mind Reading

Published August 5th, 2019 - 09:10 GMT
(Shutterstock/ File Photo)

US researchers say they are now a step closer to mind reading. A team led by Edward Chang of the University of California has, for the first time, decoded information from a heard or spoken conversation by measuring brain activity.

Commenting on the study, Tonio Ball, a German expert from Freiburg University Hospital, said: "This is a very exciting development. Chang and his colleagues are known worldwide as leaders in decoding brain signals."

The authors hope that the results will someday help those who have lost the ability to speak due to stroke, neurological degeneration, or amyotrophic lateral sclerosis (ALS).

"Although restoring communication capabilities, albeit in a limited way, can significantly improve the quality of life, to date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of a human conversation," the researchers said.

People who have lost the ability to speak have so far relied on assistive communication tools controlled by eye movements or a keyboard, which let them communicate with others, but only slowly. The researchers believe that an artificial speech device should deliver ideas as spoken words in real time. To develop such a system, they studied three people fitted with a small patch of tiny electrodes placed directly on the cerebral cortex.


The electrodes recorded the brain's activity, particularly in the speech and hearing centers, during repeated conversations. During the experiment, the participants heard a series of questions, such as "What is your favorite musical instrument?" They were then asked to choose one of several answers displayed on a screen and say it aloud, for example, "electric guitar". At the same time, the researchers used electrocorticography, recording through the electrodes on the brain's surface, to monitor the activity of neurons that typically fire when people hear or speak.

Using a software program designed specially for this purpose, the researchers analyzed these measurements to identify the questions the participants heard and the answers they gave. In 85 percent of cases, the software correctly determined whether a participant was hearing a question or giving an answer. The researchers could also predict the correct answer more accurately when they knew which question the participant had heard. They said further improvements in speech recognition would in turn improve the signal-decoding program.

This article has been adapted from its original source.
