California researchers have developed brain-computer interfaces (BCIs) that translate brain signals into words, enabling faster communication for people who have lost the ability to speak. Using a combination of brain implants and trained algorithms, the systems achieved between 62 and 78 words per minute. Despite current limitations, including invasive surgery and high error rates, advances such as wireless BCIs and better artificial intelligence promise improved communication for affected people, Big Think reports.
California researchers have presented two new brain-computer interfaces (BCIs) that translate brain signals into words. In two people who can no longer speak for themselves, the devices have enabled “speech” at rates up to four times faster than any previous device.
“It is now possible to imagine a future where we can render fluent conversation to a person with paralysis, allowing them to freely say whatever they want to say with enough precision to be reliably understood,” said Francis Willett, who co-authored a study on one of the devices.
The Challenge: Brain injuries, neurological disorders, strokes and other health problems have deprived countless people of the ability to speak – even if they understand the language and know what they want to say, their bodies just don’t cooperate.
Pat Bennett is one of them.
The 68-year-old was diagnosed with amyotrophic lateral sclerosis (ALS) about a decade ago and, as the disease progressed, she lost the ability to move the muscles necessary to produce decipherable speech. She can still type with her fingers, but the process is becoming more and more difficult.
Ann Johnson is the other.
In 2005, she suffered a stroke that left her completely paralyzed. Now, at the age of 47, she can make small movements with her head and parts of her face, but she still cannot speak. She uses an assistive device to spell out the words she wants to say at a rate of only 14 words per minute (wpm), far slower than the average speaking rate of about 160 wpm.
What is new? Bennett and Johnson have now regained their ability to “speak” at average speeds of 60 to 80 wpm thanks to new speech BCIs, which use a combination of brain implants and trained computer algorithms to translate thoughts into text.
The previous record for speaking with a BCI was only 18 wpm.
Bennett’s system was developed by a team at Stanford University, while Johnson’s was developed by researchers at UC San Francisco. The Stanford study was previously shared on the bioRxiv preprint server, but both groups published papers on their devices in the journal Nature on August 23.
How it works: In 2022, Bennett underwent surgery to implant four sensors in the outermost layer of the brain, in areas known to play a role in speech. Gold wires leading from the brain implants connect to a port in her skull.
After connecting the port to a computer, she spent about 100 hours – over 25 sessions – trying to repeat sentences from a large data set.
Now, when Bennett tries to speak, the system sends its best estimate of the phonemes she is attempting to produce to a language model, which predicts the words and displays them on a screen.
“This system is trained to know which words should come before others and which phonemes form which words,” Willett said. “Even if some phonemes have been misinterpreted, it can still make a good guess.”
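Neither paper’s decoder is reproduced here, but the idea Willett describes can be sketched in a few lines of Python: a noisy phoneme sequence is matched against a pronunciation lexicon, with a simple word-frequency prior standing in for the language model. The toy lexicon, the priors, and the scoring rule below are all illustrative assumptions, not details from either study.

```python
# Hypothetical sketch: not either paper's code, just the general idea of
# correcting noisy phoneme estimates with a word-level language prior.
from difflib import SequenceMatcher

# Toy pronunciation lexicon (ARPAbet-style phonemes) with made-up unigram
# priors. All entries are illustrative, not taken from the studies.
LEXICON = {
    "hello": (["HH", "AH", "L", "OW"], 0.30),
    "help":  (["HH", "EH", "L", "P"], 0.25),
    "water": (["W", "AO", "T", "ER"], 0.25),
    "yes":   (["Y", "EH", "S"], 0.20),
}

def decode_word(noisy_phonemes):
    """Pick the lexicon word whose pronunciation best matches the
    decoder's (possibly wrong) phoneme sequence, weighted by a prior."""
    best_word, best_score = None, -1.0
    for word, (phones, prior) in LEXICON.items():
        similarity = SequenceMatcher(None, noisy_phonemes, phones).ratio()
        score = similarity * prior  # phoneme match x language prior
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# One phoneme was misread ("HH" came out as "F"), yet the intended
# word is still recovered thanks to the lexicon and prior.
print(decode_word(["F", "EH", "L", "P"]))  # -> "help"
```

A real decoder works over continuous streams of phoneme probabilities and a vocabulary thousands of times larger, but the division of labor is the same: a match against candidate pronunciations, weighted by a language model.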
Using the system, Bennett can “speak” at an average speed of 62 words per minute. When limited to a vocabulary of only 50 words, the speech BCI has an error rate of 9.1%. When the vocabulary is expanded to 125,000 words, the rate is 23.8%, which means that approximately one in four words is wrong.
This percentage seems high, but it is also a huge step forward. The BCI that previously held the speed record had an average error rate of 25% on a vocabulary of 50 words.
Meanwhile, Johnson’s team placed just a single sensor on the surface of her brain, a less invasive procedure, though that one sensor contains about the same total number of electrodes as the four sensors used by the Stanford group.
She then spent weeks repeating phrases from a 1,024-word data set to train the system’s algorithm to predict which of the 39 English phonemes she was trying to pronounce. By the end of training, the system could decode the words she was trying to speak at an average rate of 78 wpm, with an error rate of 25.5%.
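As a rough illustration of that training setup (not the UCSF team’s actual model, which is a far richer neural network), a supervised 39-way phoneme classifier over synthetic “neural” feature vectors might look like the sketch below; the channel count, data sizes, and model choice are all placeholder assumptions.

```python
# Hypothetical sketch of the training setup only: label each window of
# neural features with the phoneme being attempted, then fit a classifier.
# Everything here is synthetic; the real system is far more sophisticated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_PHONEMES = 39   # the 39 English phonemes mentioned in the article
N_FEATURES = 253  # placeholder electrode-feature count (an assumption)

# Fake data: repeated phrases yield labeled feature windows with a weak
# phoneme-dependent signal buried in noise.
y = rng.integers(0, N_PHONEMES, size=5000)
class_means = rng.normal(size=(N_PHONEMES, N_FEATURES))
X = class_means[y] + rng.normal(size=(5000, N_FEATURES))

# Fit a simple multiclass classifier mapping features -> phoneme labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```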
However, instead of just displaying words as text on a screen, the UCSF team took the speech BCI a step further by pairing it with a digital avatar of Johnson’s face and a synthetic voice trained to sound exactly like her.
To train the avatar to make the right facial expressions for Johnson, the researchers recorded her brain activity as she tried to make the expressions over and over again.
To recreate Johnson’s voice, they fed an AI a recording of her speech at her wedding, which took place just months before her life-changing stroke.
“When Ann first used this system to speak and move the avatar face in tandem, I knew it was going to be something that would have a real impact,” said researcher Kaylo Littlejohn.
Looking ahead: Both Bennett and Johnson had to undergo risky brain surgery to have their sensors implanted, and unfortunately, over time, the scar tissue that forms around brain implants can begin to interfere with the brain signals.
Also, the error rates are still relatively high, and both systems are usable only in the laboratory and require many hours of training.
But wireless BCIs and brain implants that don’t cause scarring or require invasive procedures are being developed, and the artificial intelligence that powers speech BCIs is steadily improving, which could lead to lower error rates and shorter training times.
In the future, it is not hard to imagine that these technologies, combined with digital avatars and improved synthetic voices, could help countless people communicate effortlessly and expressively, as they did before disease or injury robbed them of their voice.