She hadn’t spoken in 18 years. Finding a voice again with a computer

Two studies published in recent days in the journal Nature strengthen hopes of restoring linguistic communication to people affected by various forms of paralysis, amyotrophic lateral sclerosis and other conditions that impair the ability to speak. While even more futuristic products, not without their problematic aspects, such as Synchron and Elon Musk’s Neuralink, remain under development, scientific research continues along its own path. Both systems use so-called brain-computer interfaces: electrodes connected to machines capable of reading brain signals and, thanks to artificial intelligence, giving patients a new voice by associating electrical patterns with phonemes.

The first experiment comes from the laboratory of Edward Chang, chair of neurological surgery at the University of California, San Francisco. With his team, which also includes researchers from Berkeley, he has reconstructed the entire speech process rather than merely turning brain signals into voice. What’s more, in addition to the voice, a digital avatar has also restored facial expressions to a woman left paralyzed by a brainstem stroke more than 18 years ago. With a very thin film of 253 electrodes implanted over an area of the brain dedicated to communication and connected by cable to a computer, the scientists intercepted the electrical activity that governs speech and the muscles of the tongue, mouth, larynx and face. They then trained an algorithm on those signals to associate patterns of brain activity with sounds. And by working on the recognition of English phonemes rather than individual words, they made the system both faster and more precise, since it is built around just forty or so phonemes.
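To make the principle concrete, here is a minimal, purely illustrative sketch of the core idea: a classifier that maps windows of electrode activity to phoneme labels. The data are synthetic and the linear decoder is a stand-in assumption; the actual studies trained far more sophisticated neural networks on real cortical recordings.

```python
# Illustrative sketch only: decoding phonemes from preprocessed neural features.
# Synthetic data and a simple linear classifier stand in for the real models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: each short window of brain activity is summarized as a
# feature vector (e.g., signal power per electrode); labels are phoneme ids.
N_ELECTRODES = 253   # as in the UCSF implant
N_PHONEMES = 40      # roughly the English phoneme inventory
N_WINDOWS = 4000     # number of synthetic training windows

# Fake training data: each phoneme gets its own mean activity pattern,
# and observed windows are noisy samples around those patterns.
phoneme_means = rng.normal(size=(N_PHONEMES, N_ELECTRODES))
labels = rng.integers(0, N_PHONEMES, size=N_WINDOWS)
features = phoneme_means[labels] + rng.normal(scale=2.0, size=(N_WINDOWS, N_ELECTRODES))

# Train a linear decoder mapping neural features -> phoneme identity.
decoder = LogisticRegression(max_iter=1000).fit(features, labels)

# Decode a new window of activity into the most likely phoneme.
new_window = phoneme_means[7] + rng.normal(scale=2.0, size=N_ELECTRODES)
predicted = decoder.predict(new_window.reshape(1, -1))[0]
print(f"decoded phoneme id: {predicted}")  # downstream, phoneme sequences become words
```

Working at the level of a few dozen phonemes, rather than thousands of whole words, is what keeps the decoding problem small enough to be both fast and accurate.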

To this work, the Californian team added an algorithm capable of synthesizing a voice that replicates the patient’s, reconstructed from a recording made before her injury, and animated an avatar with the help of software from the British company Speech Graphics and algorithms driven by brain activity. The system has shown it can work at a pace of 80 words per minute with a vocabulary that can expand to over a thousand words. “Our goal is to restore a complete, embodied way of communicating that is truly the most natural way for us to talk to each other,” Chang said in a UCSF statement. “These advances bring us much closer to making this technology a real solution for patients.” The next step is to turn this experiment into a wireless device that gives patients more freedom.

Another study, from the Stanford School of Medicine, worked with a patient unable to speak due to the progression of amyotrophic lateral sclerosis, and instead used four devices with 64 electrodes each, also implanted in the brain areas responsible for language. The principles are the same: the system is connected to a PC and relies on machine learning of the relationship between electrical patterns and 39 English phonemes. In this case, however, the results are displayed on a computer screen rather than brought to life through an avatar. After four months of twice-weekly four-hour training sessions, which began after implantation in late March 2022, the AI was able to translate the patient’s attempted speech at a rate of 62 words per minute, which rose to 160 in the months that followed. However, as the vocabulary expanded, the error rate also rose, from 9 to 24% on a vocabulary of 125,000 words. But for a technology still at the prototype stage and not yet ready for commercial development, it is very promising indeed.
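Those 9-to-24% figures refer to word error rate, the standard metric for decoders of this kind: the word-level edit distance between what the system produced and what the patient intended, divided by the length of the intended sentence. A minimal sketch of the computation, with a made-up example sentence:

```python
# Illustrative sketch: word error rate (WER), the metric behind the 9%-24%
# figures, computed as word-level edit distance over the reference length.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances (substitutions,
    # insertions, deletions) between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical decoded output vs. what the patient meant to say:
print(word_error_rate("i would like some water please",
                      "i would like sum water"))  # -> 0.333 (2 errors / 6 words)
```

By this measure, a 24% error rate means roughly one word in four comes out wrong, which is why the researchers describe the system as a promising prototype rather than a finished product.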

Source: Vanity Fair
