Artificial Intelligence: Huge Leaps in Brain-Computer Interfaces

Facebook (as it was known before its pivot to the metaverse) grabbed headlines when it began funding brain-computer interface technology, looking for a way to let users produce text just by thinking about it. The company wanted to create a new way of interacting with technology: a system in which a user could simply imagine speaking, and a device would convert the resulting electrical signals into text.

Facebook had hoped to create a wearable device that could pick up language by translating electrical signals from the brain into digital information. Despite the intriguing prospect of the social media giant developing a consumer-first language interface, the company pulled out of the project last year, releasing its existing language research as open access and shifting its focus to neural interfaces for capturing motion rather than language.

While the American giant has withdrawn from the field, a number of laboratories are pressing ahead and making breakthroughs in converting the brain signals behind language into text or speech. These projects collect data directly at the source, using electrodes in direct contact with the surface of the brain. Unlike systems based on wearable devices, brain-computer interfaces that use implanted electrodes offer a better signal-to-noise ratio and allow more detailed, specific recordings of brain activity.

The first systems that were developed

Last year, UCSF, Facebook’s research partner, announced in a research paper that the Chang Lab, named after neurosurgeon Edward Chang, who heads the facility, had created a working thought-to-text interface. The system uses sensors housed in a polymer sheet that, when placed on the surface of the brain, pick up neural signals from the user. This information is then decoded by machine-learning systems to produce the words the user wants to say.

The first user of the system was a man who had suffered a stroke that left him with extremely limited movement of his head, neck and limbs, and unable to speak. Since the stroke, he has had to communicate by moving his head, using a pointer attached to a baseball cap to tap out letters on a screen.

Normally, signals from the brain travel to the muscles of speech via nerves, which you can think of as the brain’s electrical wiring. In the participant’s case, those wires had effectively been cut between the brain and the vocal muscles: when he tried to speak, the signals formed but could not reach their destination. The interface picks up these signals directly from the brain’s speech cortex, analyzes them to determine which speech-related muscles the participant was trying to move, uses that information to work out the words he intended to say, and converts them into text. As a result, the participant is able to communicate faster and more naturally than he has been able to in the 15 years since his stroke.

Increasing accuracy

The test participant can say any one of the 50 words the system is able to recognize. The UCSF researchers chose words that are commonplace, related to his care, or simply words the participant wished he could say, such as family, goodness or water.

To create a working interface, the system first has to be trained to recognize the signals associated with each word. To build a dataset large enough for the interface software to learn from, the participant practiced attempting each word roughly 200 times. The signals were recorded from a 128-channel electrode array placed on his brain and interpreted by an artificial neural network, a nonlinear model capable of learning complex patterns of brain activity and correlating them with attempted speech.
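
To make that training setup concrete, here is a minimal sketch in Python (using PyTorch) of the kind of model that could map windows of 128-channel neural recordings to one of 50 word labels. The architecture, window length, synthetic data and training loop are illustrative assumptions, not the Chang Lab’s published model.

```python
import torch
import torch.nn as nn

N_CHANNELS, N_WORDS, WINDOW = 128, 50, 100   # window length per attempt is an assumption

class WordDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=N_CHANNELS, hidden_size=256, batch_first=True)
        self.classifier = nn.Linear(256, N_WORDS)

    def forward(self, x):                 # x: (batch, time, channels)
        _, h = self.rnn(x)                # final hidden state summarizes the window
        return self.classifier(h[-1])     # logits over the 50-word vocabulary

# Small synthetic stand-in for the real dataset (roughly 200 recorded attempts per word).
reps = 8
signals = torch.randn(N_WORDS * reps, WINDOW, N_CHANNELS)
labels = torch.arange(N_WORDS).repeat_interleave(reps)

model = WordDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                    # a real model would train far longer, with validation
    optimizer.zero_grad()
    loss = loss_fn(model(signals), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```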

When the user attempts to say a sentence, the language model estimates the probability that he is trying to say each of the 50 words, as well as how likely those words are to follow one another in a sentence, and produces the final result in real time. With this approach, the system was able to decode the participant’s attempted speech at up to 18 words per minute with an accuracy of 93%.
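
That decoding step can be illustrated with a toy example: combine the classifier’s per-word probabilities with a simple bigram language model and search for the most likely word sequence. The tiny vocabulary, the probabilities and the Viterbi search below are invented for illustration; the real system’s language model is considerably more sophisticated.

```python
import numpy as np

vocab = ["I", "am", "thirsty", "family", "water"]     # toy subset of the 50-word vocabulary
V = len(vocab)

# Toy bigram model P(word_t | word_{t-1}); the values are invented, then row-normalized.
bigram = np.full((V, V), 1.0)
bigram[vocab.index("am"), vocab.index("thirsty")] = 3.0   # make "am thirsty" more likely
bigram /= bigram.sum(axis=1, keepdims=True)

def viterbi(word_probs, bigram):
    """word_probs: (T, V) classifier probabilities for each vocabulary word at each step."""
    T = word_probs.shape[0]
    score = np.log(word_probs[0] + 1e-12)        # log-prob of the best path ending in each word
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + np.log(bigram + 1e-12) + np.log(word_probs[t] + 1e-12)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                # trace the best path backwards
        path.append(back[t, path[-1]])
    return [vocab[i] for i in reversed(path)]

# Noisy per-word probabilities for a three-word attempt (invented numbers).
probs = np.array([[0.60, 0.20, 0.10, 0.05, 0.05],
                  [0.20, 0.50, 0.20, 0.05, 0.05],
                  [0.10, 0.10, 0.45, 0.05, 0.30]])
print(viterbi(probs, bigram))   # -> ['I', 'am', 'thirsty']
```

Even this toy version shows the core idea: a word the classifier is unsure about can still be recovered if the language model says it fits the sentence.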

The UCSF team now hopes to expand use of the system to new participants. According to David Moses, a postdoctoral engineer in Chang’s lab and one of the lead authors of the study, many people are asking to take part in UCSF’s research on thought-to-speech interfaces. “You need the right person. There are a lot of criteria for inclusion, not only regarding the type of disability the person has, but also their general health and other factors. It is also important that they understand that this is a research study and there is no guarantee the technology will benefit them significantly, at least in the near future. You need a certain type of person,” he explains in an interview with ZDNet.

Amazing results

Most of the arrays used in human experiments with these interfaces, in which the electrodes are placed directly on the surface of the brain, are made by one company, Blackrock Neurotech.

Blackrock Neurotech is also working on language applications for brain-machine interfaces. Instead of using signals intended for the speech muscles, as in the UCSF experiment, the company built a system around imagined handwriting: the user mentally imagines writing an “A”, and the system converts it into text using an algorithm developed at Stanford University. The system currently operates at around 90 characters per minute, and the company hopes one day to reach 200 characters per minute, roughly the speed at which an average person types.
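
As a rough illustration of how a stream of imagined-handwriting predictions can become text, the sketch below uses greedy CTC-style decoding, a generic technique that collapses repeated frame-by-frame character predictions and skips “blank” frames. It is a hypothetical stand-in, not a description of the Stanford or Blackrock Neurotech decoder; the alphabet and frame probabilities are invented.

```python
import numpy as np

ALPHABET = ["-", "a", "b", "c"]      # index 0 is the blank symbol; tiny toy alphabet

def greedy_ctc_decode(frame_probs):
    """frame_probs: (T, len(ALPHABET)) probabilities emitted while the user imagines writing."""
    best = frame_probs.argmax(axis=1)          # most likely symbol in each frame
    chars = []
    prev = 0
    for idx in best:
        if idx != prev and idx != 0:           # collapse repeats, drop blanks
            chars.append(ALPHABET[idx])
        prev = idx
    return "".join(chars)

# Toy frame stream that decodes to "cab".
frames = np.array([
    [0.10, 0.10, 0.10, 0.70],   # c
    [0.10, 0.10, 0.10, 0.70],   # c (repeat, collapsed)
    [0.80, 0.10, 0.05, 0.05],   # blank
    [0.10, 0.70, 0.10, 0.10],   # a
    [0.10, 0.10, 0.70, 0.10],   # b
])
print(greedy_ctc_decode(frames))     # -> "cab"
```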

This system, perhaps one of the earliest to be commercialized, is likely to be used by people with conditions such as amyotrophic lateral sclerosis (ALS), an incurable disease also known as Lou Gehrig’s disease or motor neuron disease. In its advanced stages, ALS can cause locked-in syndrome, in which a person cannot use any of their muscles to move, speak, swallow or even blink, even though their mind remains as active as ever. Interfaces like Blackrock Neurotech’s are meant to allow people with ALS or locked-in syndrome, which some strokes can also cause, to keep communicating.

“We’ve had cases where the neural interface spelled out a word that the autocorrect kept trying to fix, and the participant reported that it was a word they had made up with their partner when they first started dating,” said Marcus Gerhardt, CEO and co-founder of Blackrock Neurotech, in an interview with ZDNet. “The neural interface was able to find a word that only two people in the world know.” The system currently works with an accuracy of 94%, which rises to 99% once autocorrect is applied.
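
The autocorrect step Gerhardt describes can be pictured as snapping each decoded word to its nearest dictionary entry. The sketch below uses Python’s standard difflib for that; the word list and examples are invented, and the last line shows the failure mode from the anecdote, where a private made-up word gets “corrected” into a dictionary word.

```python
from difflib import get_close_matches

DICTIONARY = ["hello", "water", "family", "goodbye", "thirsty"]   # invented word list

def autocorrect(decoded_word, cutoff=0.6):
    """Return the closest dictionary word, or the original if nothing is close enough."""
    matches = get_close_matches(decoded_word, DICTIONARY, n=1, cutoff=cutoff)
    return matches[0] if matches else decoded_word

print(autocorrect("watre"))      # -> "water"  (a genuine decoding slip gets fixed)
print(autocorrect("famly"))      # -> "family"
print(autocorrect("waterkins"))  # -> "water"  (an invented private word is "corrected"
                                 #    away, the failure mode in the anecdote above)
```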

The critical issue of cost

Although still at a relatively early stage of development, brain-machine interfaces have the potential to improve the quality of life of people with conditions that currently prevent them from speaking. The technologies behind these interfaces have made great strides in recent years and are getting ever faster at turning attempted speech into words on a screen, but there is still much work to be done before such systems can be offered to the wider population.

It goes without saying that, given the novelty of brain-machine interfaces, privacy and data-ownership concerns must be addressed before any large-scale commercialization. Because the technology is so new, there is also much to learn about long-term use, starting with the practical question of how long electrodes keep working in the brain’s hostile environment: Blackrock Neurotech arrays have remained in place for seven years, and the company believes ten years is possible.

There is also the issue of long-term support, according to Mariska Vansteensel, associate professor at the UMC Utrecht Brain Center. Systems will need regular parameter adjustments as a disease progresses, as other factors change brain activity, or as user preferences evolve, and hardware may need to be replaced or updated. At present there is no agreed framework for deciding who should manage that long-term support.

Perhaps the most pressing challenge for technologies like those from Blackrock Neurotech and UCSF is that they target relatively small groups of patients. The systems themselves are specialized and expensive, and implanting them requires specialized and expensive neurosurgery. If language-focused brain-machine interfaces do make it to market, their cost could keep them from reaching the people who need them most.

Source: ZDNet.com
