17 September 2019

Not a head, but a computer

What are neural interfaces and what awaits them in the future

Maria Ermolova, N+1

Neurointerfaces, technologies that connect the brain to a computer, are gradually becoming routine: we have already seen a person control a prosthesis or type text on a computer by mental command. Does this mean that the promises of science fiction writers, who wrote about full-fledged mind reading by computer or even about transferring human consciousness to a machine, will soon come true? In 2019, the science fiction short story contest "Future Time", organized by the Sistema Charitable Foundation, is devoted to the same theme, "Augmented Personality". Together with the contest's organizers, the editors of N+1 looked into what modern neural interfaces can do and whether we can really create a full-fledged brain-computer connection. We were helped by Alexander Kaplan, founder of Russia's first brain-computer interface laboratory at Lomonosov Moscow State University.

"Hack" the body

Neil Harbisson has congenital achromatopsia, which deprived him of color vision. Deciding to outwit nature, the Briton had a special camera implanted that converts colors into sound signals and sends them to his inner ear. Neil considers himself the first cyborg officially recognized by a state.

In 2012, Andrew Schwartz of the University of Pittsburgh in the United States demonstrated a paralyzed 53-year-old patient who used electrodes implanted in her brain to send signals to a robotic arm. She learned to control it well enough to feed herself a bar of chocolate.

In 2016, in the same laboratory, a 28-year-old patient with a severe spinal injury extended a brain-controlled artificial hand to Barack Obama, who was visiting him. Sensors on the hand allowed the patient to feel the handshake of the 44th President of the United States.

Modern biotechnology lets people "hack" the limitations of their own bodies, creating a symbiosis between the human brain and the computer. It seems that bioengineering will soon be part of everyday life.

What comes next? The philosopher and futurist Max More, a proponent of transhumanism, has been developing the idea of humanity's transition to a new stage of evolution, with the help of computer technology among other things, since the end of the last century. The literature and cinema of the last two centuries have played similar games of futuristic imagination.

In the world of William Gibson's science fiction novel "Neuromancer", published in 1984, implants let their wearers connect to the Internet, expand their intellectual capabilities, and relive memories. Masamune Shirow, the author of the cult Japanese sci-fi manga "Ghost in the Shell", recently adapted for film in the USA, describes a future in which any organ can be replaced with bionics, up to the complete transfer of consciousness into the body of a robot.

How far can neural interfaces go in a world where, on the one hand, ignorance multiplies fantasies, and on the other, fantasies often turn out to be prophetic?

Potential difference

The central nervous system (CNS) is a complex communication network. The brain alone contains more than 80 billion neurons, with trillions of connections between them. Every millisecond, the distribution of positively and negatively charged ions inside and outside each nerve cell changes, determining how and when it will respond to the next signal. At rest, a neuron has a negative potential relative to its environment (on average -70 millivolts), the "resting potential": it is polarized. If a neuron receives an electrical signal from another neuron, then for the signal to be passed on, positive ions must enter the nerve cell; this is depolarization. When depolarization reaches a threshold value (approximately -55 millivolts, though this value can vary), the cell fires and lets in more and more positively charged ions, creating a positive potential, the "action potential".
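This threshold behavior is easy to sketch in code. Below is a deliberately crude "leaky integrate-and-fire" toy model, a standard textbook simplification rather than a faithful description of a real neuron; it uses the two voltages named above, while every other parameter is illustrative.

```python
# A minimal sketch, not a physiological model: a leaky integrate-and-fire
# neuron with the figures from the text (resting potential -70 mV, firing
# threshold about -55 mV). All other parameter values are illustrative.
import numpy as np

V_REST = -70.0       # resting potential, mV
V_THRESHOLD = -55.0  # firing threshold, mV
V_SPIKE = 40.0       # peak of the action potential, mV (illustrative)
TAU = 10.0           # membrane time constant, ms
DT = 0.1             # simulation step, ms

def simulate(input_current, steps=1000):
    """Integrate the membrane potential; emit a spike when it crosses threshold."""
    v = V_REST
    trace = []
    for _ in range(steps):
        # the leak pulls the potential back toward rest; input depolarizes it
        v += DT * ((V_REST - v) / TAU + input_current)
        if v >= V_THRESHOLD:       # threshold reached: the cell fires...
            trace.append(V_SPIKE)  # ...record the spike peak
            v = V_REST             # ...and reset (repolarization)
        else:
            trace.append(v)
    return trace

trace = simulate(input_current=2.0)  # constant depolarizing drive
print(f"spikes in {len(trace) * DT:.0f} ms:", sum(t == V_SPIKE for t in trace))
```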

[Image: Action potential (Studopedia.ru)]

The action potential then travels along the axon, the cell's outgoing channel, toward a dendrite, the receiving channel of the next cell. The axon and the dendrite are not directly connected, however, and the electrical impulse cannot simply pass from one to the other. The point of contact between them is called the synapse. Synapses produce, transmit, and receive neurotransmitters: chemical compounds that directly "forward" the signal from the axon of one cell to the dendrite of another.

When the impulse reaches the end of the axon, neurotransmitters are released into the synaptic cleft; they cross the space between the cells and attach to the end of the dendrite. There they force the dendrite to let in positively charged ions, shift from the resting potential to the action potential, and pass the signal on to the cell body.

The type of neurotransmitter also determines what signal is sent next. For example, glutamate excites neurons, gamma-aminobutyric acid (GABA) is the most important inhibitory mediator, and acetylcholine can do either, depending on the situation.

This is how a neuron looks schematically:

[Image: Neuron diagram (Wikimedia Commons)]

And this is how it looks in reality:

[Image: A neuron under the microscope (Wikimedia Commons)]

Moreover, the response of the receiving cell depends on the number and rhythm of incoming impulses, on information coming from other cells, and on the brain region the signal was sent from. Various auxiliary cells, the endocrine and immune systems, the external environment, and previous experience all determine the state of the central nervous system at a given moment, and thereby affect human behavior.

And although, as we now understand, the central nervous system is not a bundle of "wires", neurointerfaces are based precisely on the electrical activity of the nervous system.

A positive leap

The main task of a neurointerface is to decode the electrical signal coming from the brain. The program has a set of "patterns", or "events", made up of various signal characteristics: oscillation frequencies, spikes (peaks of activity), locations on the cortex, and so on. It analyzes the incoming data and tries to detect these events in it.

The commands sent onward depend on the result obtained, as well as on the functionality of the system as a whole.

An example of such a pattern is the P300 evoked potential (Positive 300), often used in so-called spellers: mechanisms for typing text using brain signals.

When a person sees the symbol he needs on the screen, a positive jump in electrical potential can be detected in the recording of brain activity about 300 milliseconds later. Having detected a P300, the system sends the command to type the corresponding symbol.

The algorithm cannot detect the potential from a single presentation, however, because the signal is drowned in random electrical activity. The symbol therefore has to be presented several times, and the resulting data averaged.
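A toy simulation shows why the averaging works. In the sketch below (synthetic data, all numbers illustrative) a P300-like bump is buried in noise twice its amplitude, and its latency is recovered reliably only after dozens of repetitions.

```python
# A minimal sketch of why spellers average over repetitions: a simulated P300
# (a positive bump ~300 ms after the stimulus) is invisible on a single trial
# but emerges after averaging. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate, Hz
t = np.arange(0, 0.6, 1 / fs)              # 600 ms epoch after the stimulus
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))  # 5 uV bump at 300 ms

def epoch(target: bool) -> np.ndarray:
    """One post-stimulus epoch: background noise, plus a P300 for targets."""
    noise = rng.normal(0, 10.0, t.size)    # noise twice the signal amplitude
    return noise + p300 if target else noise

for n_trials in (1, 10, 50):
    avg = np.mean([epoch(target=True) for _ in range(n_trials)], axis=0)
    peak_ms = 1000 * t[np.argmax(avg)]
    print(f"{n_trials:3d} trial(s): peak found at {peak_ms:.0f} ms")
# With 1 trial the peak lands almost anywhere; with 50 it sits near 300 ms.
```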

Besides a momentary change in potential, a neurointerface can look for changes in the rhythmic (that is, oscillatory) activity of the brain caused by a certain event. When a sufficiently large group of neurons falls into a synchronous rhythm of activity fluctuations, this appears on the signal's spectrogram as event-related synchronization (ERS). If, on the contrary, the oscillations desynchronize, the spectrogram shows event-related desynchronization (ERD).

At the moment a person makes, or merely imagines, a hand movement, ERD is observed in the motor cortex of the opposite hemisphere at frequencies of about 10-20 hertz.
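In practice ERD is usually quantified as the percentage drop in band power relative to a resting baseline. Here is a minimal sketch on synthetic data with a simulated 12-hertz rhythm; a real system would compute this per electrode over the motor cortex.

```python
# A minimal sketch of ERD detection: compare 10-20 Hz band power at rest
# against an epoch where the rhythm is attenuated (simulated data).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 2.0, 1 / fs)
mu = np.sin(2 * np.pi * 12 * t)                  # simulated 12 Hz "mu" rhythm

rest = 3.0 * mu + rng.normal(0, 1.0, t.size)     # strong rhythm at rest
imagery = 0.5 * mu + rng.normal(0, 1.0, t.size)  # rhythm suppressed (ERD)

def band_power(x, lo=10.0, hi=20.0):
    """Total power in the [lo, hi] Hz band, from the Welch spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f <= hi)].sum()

p_rest, p_img = band_power(rest), band_power(imagery)
erd = 100 * (p_img - p_rest) / p_rest            # percent change vs baseline
print(f"10-20 Hz power: rest={p_rest:.1f}, imagery={p_img:.1f}, ERD={erd:.0f}%")
```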

These and other patterns can be set in the program manually, but they are often created in the course of working with each specific person. Each brain, like the features of its activity, is individual, and the system must be adapted to it.

Recording electrodes

Most neurointerfaces record activity with electroencephalography (EEG), a non-invasive neuroimaging method, because of its relative simplicity and safety. Electrodes attached to the surface of the head register changes in the electric field caused by changes in the potential of dendrites after an action potential has "crossed" the synapse.

At the moment positive ions penetrate the dendrite, a negative potential forms in the surrounding medium. At the other end of the neuron, ions of the same charge begin to leave the cell, creating a positive potential outside, and the space around the neuron becomes a dipole. The electric field propagating from this dipole is what the electrode registers.

Unfortunately, the method has a number of limitations. The skull, the skin, and the other layers separating the nerve cells from the electrodes, though conductors, are not good enough to keep the signal from being distorted.

The electrodes can register only the summed activity of many neighboring neurons. The main contribution to the measurement comes from neurons in the upper layers of the cortex whose processes run perpendicular to its surface, because they create the dipole whose electric field the sensor detects best.

All this leads to the loss of information from deep structures and a decrease in accuracy, so the system is forced to work with incomplete data.
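Distance alone already accounts for much of this loss. A crude point-dipole estimate, which ignores the layered conductivities of scalp and skull and is purely illustrative, shows how quickly a source's contribution fades with depth:

```python
# A minimal sketch of why deep sources are hard to see: the potential of a
# current dipole falls off roughly as 1/r^2 (a crude point-dipole estimate;
# constants and tissue conductivities are deliberately ignored).
def dipole_potential(moment, depth_cm):
    """Relative far-field potential of a radial dipole below the electrode."""
    return moment / depth_cm ** 2  # relative units only

shallow = dipole_potential(1.0, depth_cm=1.5)  # upper cortical layers
deep = dipole_potential(1.0, depth_cm=7.0)     # a deep brain structure
print(f"deep source appears {shallow / deep:.0f}x weaker at the scalp")
# ~22x weaker, before skull attenuation is even taken into account
```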

Invasive electrodes, implanted on the surface of the brain or directly inside it, allow much greater accuracy.

If the desired function is associated with the surface layers of the brain (for example, motor or sensory activity), implantation is limited to a trepanation and the attachment of electrodes to the surface of the cortex. The sensors read the summed electrical activity of many cells, but the signal is far less distorted than in EEG.

If deeper activity matters, the electrodes are inserted into the cortex. With special microelectrodes it is even possible to register the activity of individual neurons. Unfortunately, the invasive technique poses a potential danger to humans and is used in medical practice only in extreme cases.

However, there is hope that the technique will become less traumatic in the future. The American company Neuralink plans to implement the idea of safely introducing thousands of thin, flexible electrodes without drilling through the skull, using a laser beam.

Several other laboratories are working on biodegradable sensors that would remove the need to extract electrodes from the brain.

Banana or orange?

Recording the signal is only the first stage. Next it must be "read" in order to determine the intentions behind it. There are two possible ways to decode brain activity: let the algorithm itself isolate the relevant characteristics from the data set, or give the system a description of the parameters it should look for.

In the first case the algorithm, unconstrained by search parameters, classifies the "raw" signal itself and finds the elements that predict intentions with the greatest probability. If, for example, the subject thinks alternately about moving the right hand and the left, the program can find the signal parameters that best distinguish one option from the other.

The problem with this approach is that the parameters describing the electrical activity of the brain are highly multidimensional, and the data are noisy with all kinds of interference.
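A minimal sketch of this first, data-driven route: a generic classifier is given labeled trials and left to find the separating features on its own. The "electrode" features below are synthetic stand-ins for band power; real recordings are far noisier and higher-dimensional, which is exactly the problem just described.

```python
# A minimal sketch of data-driven decoding: a classifier learns which
# features separate "left hand" from "right hand" imagery. The features are
# synthetic stand-ins for per-electrode band power; all numbers illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features = 200, 32          # e.g. 32 electrode band-power values

X = rng.normal(0, 1.0, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)        # 0 = left hand, 1 = right hand
# simulate ERD over the contralateral motor cortex: lower the band power
# of one "electrode" per class
X[y == 0, 3] -= 1.0                     # left imagery: right-hemisphere channel
X[y == 1, 17] -= 1.0                    # right imagery: left-hemisphere channel

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # well above chance (0.50)
```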

With the second decoding approach, you have to know in advance where to look and what to look for. For example, in the P300 speller described above, we know that when a person sees the desired symbol, the electrical potential changes in a particular way. We teach the system to look for those changes.
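A minimal sketch of this second route on synthetic data: a stored waveform is slid along the recording, and a detection is declared where the correlation with the template peaks.

```python
# A minimal sketch of template matching: a known waveform is correlated
# against a noisy recording, and the best-matching position is reported.
# The template shape and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
fs = 250
tt = np.arange(0, 0.2, 1 / fs)
template = np.exp(-((tt - 0.1) ** 2) / (2 * 0.02 ** 2))  # the known waveform

signal = rng.normal(0, 0.4, 2 * fs)            # 2 s of background noise
signal[130:130 + tt.size] += template          # hide one event at ~0.52 s

corr = np.correlate(signal, template, mode="valid")  # slide template along signal
hit = np.argmax(corr)
print(f"event detected at {hit / fs:.2f} s (planted at {130 / fs:.2f} s)")
```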

In such a situation, the ability to decipher a person's intentions is tied to our knowledge of how brain functions are encoded in neural activity. How does this or that intention or state manifest itself in a signal? Unfortunately, in most cases we don't have an answer to this question.

Neurobiological studies of cognitive functions are actively under way, but we can still decipher only a tiny fraction of signals. The brain and the mind remain, for now, a "black box".

Alexander Kaplan, neurophysiologist, Doctor of Biological Sciences, and founder of the Laboratory of Neurophysiology and Neurointerfaces at Lomonosov Moscow State University, who received Russia's first grant for developing a neurointerface to connect brain and computer, says that researchers can automatically decipher certain human intentions or mentally imagined images from their EEG signatures.

However, at the moment there are no more than a dozen such intentions and images. As a rule, they are states associated with relaxation and mental effort, or with imagining movements of parts of the body. And even these are recognized with errors: establishing from the EEG that a person intends to clench his right hand into a fist succeeds, even in the best laboratories, in no more than 80-85 percent of attempts.

And if you try to determine from the EEG whether a person is imagining a banana or an orange, the number of correct answers will only slightly exceed the level of random guessing.

The main obstacle is that we lack the keys to the signals with which nerve cells communicate. Without knowing the codes, it is impossible to tap into the information flows. The problem is not that these keys are hard to pick, but that they never existed in the first place. In each pair of nerve cells, mutual understanding rests not only on the nerve impulses running between them, but also on their interaction with thousands of other nerve cells, and this interaction is modified every second, reacting to fleeting thoughts, stomach cramps, gusts of wind. How can all this be taken into account in order to connect to the brain properly?
Alexander Kaplan

The saddest thing is that, for more than 15 years, it has proved impossible either to improve the reliability with which neurointerface systems recognize human intentions from the EEG or to expand the list of such intentions, despite significant advances over the same period in algorithms and computing technology.

Apparently, only a small part of a person's mental activity is reflected in the EEG. Neurointerface systems should therefore be approached with moderate expectations, and the scope of their real application clearly delineated.

Translation difficulties

Why can't we create a system that does what the brain does with ease? In short, the brain's wiring is too complex for our analytical and computational capabilities.

First, we do not know the "language" in which the nervous system communicates. Besides series of impulses, it is characterized by many variables: the features of the pathways and of the cells themselves, the chemical reactions occurring at the moment information is transmitted, the work of neighboring neural networks and of other systems of the body.

Besides the fact that the "grammar" of this "language" is complex in itself, it may differ from one pair of nerve cells to another. The situation is aggravated by the fact that the rules of communication, the functions of cells, and the relationships between them are all highly dynamic, constantly changing under the influence of new events and conditions. This exponentially increases the amount of information that has to be taken into account.

Data that fully described brain activity would simply drown any algorithm that undertook to analyze it. Decoding intentions, memories, and movements therefore turns out to be an almost unsolvable task.

If impulses are transmitted from one computer to another, you can tell from the addresses and protocols that this is, say, a transfer from one memory address to another, because the exchange protocol and the data format show us this. With the brain, there is no chance of making a direct connection the way two processors connect. So there are no theoretical prerequisites for information flowing from the brain to the computer and from the computer to the brain: there are no data formats, no addresses, no codes.
Alexander Kaplan

The second obstacle is that we do not know much about the brain functions we are trying to detect. What are memory or a visual image, what do they consist of? Neurophysiology and psychology have long been trying to answer these questions, but so far there has been no great progress.

When we create a visual image, where is it? In the whole head. Because it is synthetic: not only visual but also tactile, olfactory, and other sensations are woven into it. So how do we connect? The brain is not a system that yields to procedures as elementary as those used to train the recognition of the license plates of passing cars. In that kind of training, a neural network is shown numbers many times, and each time it is told what those numbers are. In our case, we would have to connect such networks to neurons, feed them different patterns of electrical activity many times, and say each time what they mean. But we don't know what they mean. All the power of computers and neural-network algorithms turns out to be useless, because we feed in these impulses without being able to say what they stand for.
Alexander Kaplan

The simplest functions, such as motor and sensory ones, have an advantage here, since they are better studied. The neurointerfaces available today therefore interact mainly with them.

They can recognize tactile sensations, the imagined movement of a limb, responses to visual stimulation, and simple reactions to events in the environment, such as the reaction to an error or to a mismatch between an expected stimulus and a real one. But higher nervous activity remains a great mystery to us today.

Two-way communication

So far we have discussed only one-way reading of information, without any influence in the opposite direction. Yet a technology for transmitting signals from computer to brain already exists: the computer-brain interface, or CBI. It makes the neurointerface's communication channel two-way.

Information (for example sound, tactile sensations, even brain data) enters the computer, is analyzed, and is transmitted to the brain by stimulating cells of the central or peripheral nervous system. All this can happen entirely bypassing the natural organs of perception, and it is successfully used to replace them.

According to Alexander Kaplan, there are at present no theoretical limits to equipping a person with artificial sensory "organs" connected directly to brain structures. Moreover, such devices are actively entering everyday human life, for example to replace impaired natural sense organs.

So-called cochlear implants are already available to people with hearing impairments: microchips that couple a microphone to the auditory receptors. Retinal implants for restoring vision are undergoing trials.

According to Kaplan, there are no technical barriers to connecting any other sensors to the brain as well: sensors that respond to ultrasound, changes in radioactivity, speed, or pressure.

The problem is that these technologies must rely entirely on our knowledge of how the brain works, which, as we have already seen, is rather limited.

The only way around this problem, according to Kaplan, is to create a fundamentally new communication channel, with its own language, and to teach not only the computer but also the brain to recognize the new signals.

There is hope that, thanks to the extremely mobile architecture of interneuronal connections, which is modified almost every second, on the one hand, and the latest advances in machine learning technologies on the other, it will be possible to build a self-learning communication channel between the brain and the computer.
Alexander Kaplan

Such developments have already begun. A few years ago, for example, the Johns Hopkins University Applied Physics Laboratory tested a bionic arm capable of transmitting tactile information to the brain.

When the sensors of the artificial hand are touched, electrodes stimulate pathways of the peripheral nervous system, which then carry the signal to the sensory areas of the brain. A person learns to recognize the incoming signals as different kinds of touch. Thus, instead of trying to reproduce the tactile code natural to humans, a new channel and a new language of communication are created.
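What such a new channel's "language" might look like in the very simplest case can be sketched as follows. The mapping below is entirely hypothetical and invented for illustration; real neuroprosthetic encodings are more complex and are tuned to each patient.

```python
# An entirely hypothetical sketch of a "new language": encode a pressure
# sensor reading as a stimulation pulse rate, which the brain could then
# learn to interpret. The ranges and the linear mapping are invented.
def pressure_to_pulse_rate(pressure, p_max=10.0, rate_min=5.0, rate_max=200.0):
    """Map pressure in newtons to a stimulation rate in pulses per second."""
    p = min(max(pressure, 0.0), p_max)  # clamp to the sensor's working range
    return rate_min + (rate_max - rate_min) * (p / p_max)

for p in (0.0, 1.0, 5.0, 10.0):         # light touch ... firm grip
    print(f"{p:4.1f} N -> {pressure_to_pulse_rate(p):5.1f} pulses/s")
```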

However, this path of development is limited by the number of new channels that we can create and how informative they will be for the brain, says Alexander Kaplan.

You can stimulate some group of cells at some frequency, but that is not a natural code, so the computer does not actually transmit information this way. In all these situations the brain can be trained only very roughly; it cannot be taught for every occasion the way nature has trained it to perceive. So there is likewise no chance of finding codes that would let the brain understand what the computer wants to tell it.
Alexander Kaplan

Future tense

Kaplan believes that at the moment neurointerface technologies have no fundamentally new path of development. According to him, the very possibility of an interface connecting brain and computer was discovered in the 1970s, the principles of brain function on which today's developments rest were described about thirty years ago, and since then there have been practically no new ideas.

Thus, the now widely used P300 potential was discovered in the 1960s, motor imagery in the 1980s-1990s, and mismatch negativity in the 1970s.

Scientists once hoped to establish a closer informational contact between the brain and processor technology, but today it has become clear that those hopes were not justified.

However, Kaplan says, it has become clear that neurointerfaces can be put to medical use. According to the scientist, the development of neurointerfaces now proceeds mostly along the line of introducing the technology into clinical practice.

A "futuristic brain" is a healthy brain, even in the most advanced age of a person. Currently, neurodegenerative diseases that reduce the productivity of the brain are spreading even faster than the age of a person increases. Imagine how much could be done at any age, if not for the deterioration of memory, slow thinking, attention disorders and a decrease in intellectual abilities. And all this is just when professional experience has already been gained, when the necessary knowledge has been accumulated, when any person is already close to the highest intellectual achievements.Therefore, a healthy brain is a new human potential and today is the main task of modern neurotechnologies.
The full potential of the human brain, inherent in nature, is far from being exhausted, and artificial intelligence technologies can ensure the disclosure of this potential even without electronics implanted in the brain.Alexander Kaplan

Nevertheless, thanks to brain research and the development of technology, today's neural interfaces are capable of things that once seemed impossible. We do not know for certain what awaits us in 30, 50, or 100 years. The historian of science Thomas Kuhn proposed that science develops in cycles: periods of stagnation are followed by paradigm shifts and the scientific revolutions they bring. It is quite possible that a revolution awaits us that will finally take the brain out of its black box, and that it will come from the most unexpected direction.

Portal "Eternal youth" http://vechnayamolodost.ru


Found a typo? Select it and press ctrl + enter Print version