15 December 2009

Brain-computer interface: wireless speech synthesizer

A wireless brain-implant speech synthesizer has been created (Membrana, based on Wired materials: Wireless Brain-to-Computer Connection Synthesizes Speech).
26-year-old Erik Ramsey has been paralyzed for 10 years and cannot speak.

Now, thanks to the implant, he has learned to transmit the first simple sounds. The promising achievement was announced by a group of scientists from the American company Neural Signals, Boston University and a number of other US institutions.

Many projects in the field of brain-computer interfaces (BCI) are aimed precisely at helping the paralyzed. Patient communication is a particularly important area here. Previously, BCI developers focused on various forms of mental typing, but that channel is too slow. It is far more attractive to learn to synthesize and voice the speech the patient pronounces in his thoughts.

Such a system has been built in the USA, and unlike previous experiments of this kind, the novelty is wireless: after surgery the implant is completely hidden in the patient's head, and no wiring, with its attendant risk of infection, passes through the skin.

Three-dimensional tomogram of the subject's head. The arrow shows a wire penetrating the dura mater. To the right and above is the electronics package mounted directly on the skull, under the skin (photo by Frank H. Guenther et al.).

The scientists placed a set of contacts in the precentral gyrus, more precisely, in the part of the cortex responsible for speech. From it, wires run to an electronic circuit mounted on the skull. There the signals are amplified and transmitted through the skin over an FM channel. The receiver sits outside and contains a coil that wirelessly powers the hidden circuit (a similar principle of powering the chip and transmitting information can be found in the latest retinal-implant project).


Diagram of the brain-interface speech synthesizer (illustration by Frank H. Guenther et al.).

From the receiver, the signals pass through an analog-to-digital converter to a decoder: the computer interprets the pattern of neuronal activity and controls the speech synthesizer.

What is important: the whole chain completes in 50 milliseconds, which is the time it takes a healthy person's command from the motor cortex to reach the tongue and larynx and set them in motion. Thus the key feature of the experiment is auditory feedback: the person tries to speak, immediately hears the resulting sounds, and corrects his attempts on the fly. It resembles the way an infant masters speech.

The authors of the device have done tremendous work to learn to isolate, from the mass of neural signals, those that define specific formants. Recall that formants are the acoustic components of speech sounds that determine their character. In real speech, the formants are set by the position and current shape of the tongue. But in a paralyzed patient, the link between the part of the cortex that controls the speech apparatus and its executive "mechanisms" is broken. That is why the scientists built a workaround scheme.
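The workaround scheme can be illustrated with a minimal sketch (not the authors' code): neural firing rates are decoded into the first two formant frequencies, and a crude source-filter synthesizer then voices the corresponding vowel. The decoder weights `W`, `b` and the firing-rate vector below are illustrative stand-ins; the real system trained its decoder on the patient's own attempts.

```python
import numpy as np

def decode_formants(rates, W, b):
    """Hypothetical linear decoder: firing rates -> (F1, F2) in Hz,
    clipped to a plausible vowel range. Stand-in for the learned decoder."""
    return np.clip(W @ rates + b, 200.0, 3000.0)

def formant_resonator(x, freq, bw, fs):
    """Two-pole resonator centered at `freq` Hz with bandwidth `bw` Hz,
    a standard building block of formant synthesis."""
    r = np.exp(-np.pi * bw / fs)
    a1 = 2.0 * r * np.cos(2.0 * np.pi * freq / fs)
    a2 = -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (1.0 - r) * x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

def synthesize_vowel(f1, f2, fs=16000, dur=0.2, pitch=120):
    """Excite two formant resonators with a glottal pulse train."""
    src = np.zeros(int(fs * dur))
    src[:: fs // pitch] = 1.0                  # impulses at the pitch period
    out = formant_resonator(src, f1, 80.0, fs)
    out = formant_resonator(out, f2, 120.0, fs)
    return out / np.max(np.abs(out))           # normalize to [-1, 1]

# Toy run: three electrode channels, decoder biased near the vowel /a/.
W = np.array([[ 3.0, -2.0, 1.0],
              [-1.0,  4.0, 2.0]])             # stand-in decoder weights
b = np.array([700.0, 1200.0])                 # bias near /a/ (F1~700, F2~1200)
rates = np.array([10.0, 25.0, 5.0])           # spikes/s on the three wires
f1, f2 = decode_formants(rates, W, b)
wave = synthesize_vowel(f1, f2)
```

Decoding only two formants is enough to distinguish vowels, which is consistent with the modest three-wire pickup described below; consonants would require tracking faster articulatory dynamics.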


Left: brain tomogram, viewed from above and from the left side.
Orange spots mark neuronal activity during attempts to pronounce words;
the red line is the precentral sulcus, the yellow line the central sulcus.
Right: a scan of the same patient after implantation of the electrodes (shown by arrows)
(photo by Frank H. Guenther et al.).

As a result, tests showed that gradual practice significantly increased the accuracy of "hits", that is, how reliably the paralyzed man reproduced specific sounds chosen by the experimenters. The details of the experiment are described in an article in PLoS ONE (Frank H. Guenther et al., A Wireless Brain-Machine Interface for Real-Time Speech Synthesis).

However, in its current form the system has only three wires connected to the cortex at precisely calculated points. Even this modest signal pickup proved enough to reliably distinguish vowel sounds in the person's thoughts. So Ramsey cannot yet speak fully, but this is only the beginning. For a man paralyzed years ago in a car accident, even these sounds are the hope of restoring spoken communication.

In the future, the creators of the device intend to increase the number of contacts to 32. Then the full palette of sounds will become available to the patient.

In addition, so far the complex works only in laboratory conditions, since an ordinary PC handles the signal decoding. In the future, however, the scientists intend to pack everything needed into a laptop. Then the subject will be able to talk to people out on the street with his synthesizer chip, a first for such experiments.

Portal "Eternal Youth" http://vechnayamolodost.ru, 15.12.2009

