05 April 2017

Artificial intelligence in medicine

The 21st century

Artificial intelligence is trending. It already paints pictures, drives cars and answers calls at companies. It is also being used ever more widely in medicine, where it shows high efficiency – and it will show even more if we involve ordinary people in data collection and change the legislation. However, some problems related to its implementation seem unsolvable in the current global political and economic situation.

Medicine, which previously focused mainly on treating acute diseases, now pays more attention to chronic ailments, many of which were not even considered diseases until recently. Doctors face the need to treat obesity, depression and the diseases of old age. Diabetes, heart failure and autoimmune disorders are increasingly diagnosed outside the acute phase, at the earliest stages, and we increasingly talk not only about maintenance therapy but about the possibility of completely curing these systemic failures of the body. Preventive medicine is developing, making it possible to recognize a predisposition to certain diseases before they manifest themselves and to take measures. Volumes of medical data are growing rapidly, and we are beginning to understand that our health and quality of life depend on the speed and quality of their analysis. And that all of this is work for artificial intelligence.


What is artificial intelligence?

Here, by artificial intelligence (AI) we will mean the ability of a machine to simulate intelligent human behavior: the ability to navigate a changing context and make optimal decisions that take those changes into account and allow the goal to be achieved.

Today, two AI technologies are used on a large scale – expert systems and neural networks. While expert systems are becoming obsolete, neural networks (NNs) have conquered the market thanks to their ability to learn.

There are several types of AI:

  • Narrow AI – designed to solve one specific task;

  • General AI (AGI) – will be able to solve any task that a person can handle;

  • Superintelligence – will surpass humans in the complexity of the tasks it can solve.

In this article, by AI I will mean "narrow AI" implemented on the basis of neural networks. The mechanism of the latter was inspired by biological neural networks. In computer form, an NN is a graph with three or more layers of neurons, connected between layers in one way or another. The connections have weights, which play an important role in training the NN.

Training a neural network can be crudely described as follows: data is fed to the input neurons, processed by the neurons of the inner layers, and some values appear on the output neurons. If those values do not suit us, we change the weights of the connections in the network and retrain it (you can read more about this in David Kriesel's book A Brief Introduction to Neural Networks). The more relevant the data fed to the input neurons, the more relevant the result of the network's operation.
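The training loop described above can be sketched in a few dozen lines of pure Python. This is only an illustration, not production code: a tiny network with three sigmoid hidden neurons learns the XOR function, the classic toy example of a task that a single layer of neurons cannot solve. The architecture, learning rate and epoch count are arbitrary choices made for the demonstration.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny network: 2 input neurons, 3 hidden neurons, 1 output neuron.
# Each neuron's weight list ends with a bias term.
hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
output = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in hidden]
    o = sigmoid(sum(output[i] * h[i] for i in range(3)) + output[3])
    return h, o

# Training data: the XOR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

error_before = mse()

lr = 0.7  # learning rate
for _ in range(5000):
    for x, target in data:
        h, o = forward(x)
        # Backpropagation: shift each weight against the error gradient.
        d_o = (o - target) * o * (1 - o)
        for i in range(3):
            d_h = d_o * output[i] * h[i] * (1 - h[i])
            hidden[i][0] -= lr * d_h * x[0]
            hidden[i][1] -= lr * d_h * x[1]
            hidden[i][2] -= lr * d_h          # hidden bias
            output[i] -= lr * d_o * h[i]
        output[3] -= lr * d_o                 # output bias

error_after = mse()
```

Changing the weights until the output values "suit us" is exactly what the loop does: after training, the mean squared error on the four examples is far lower than it was with random weights.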


What needs to be done right now?

Tons of medical records are gathering dust on the shelves of hospitals and polyclinics. Meanwhile, if neural networks were trained on this material, artificial intelligence systems would save many lives and reduce treatment costs. However, opening up medical histories is a bold step, and many will resist it, believing that their personal data could be used against them. Opening the data must be subject to a variety of conditions and accompanied by the signing of agreements that guarantee (possibly with the participation of states) its use strictly for the intended purpose. But one way or another, medical records must be made available to neural networks: today, "training sets" of information are the bottleneck of AI in medicine.

What can AI do in medicine?

A diagnostician and an assistant to the attending physician

It can be difficult for a doctor to diagnose a disease correctly, especially if he does not have much practice or a specific case is far from his professional experience. Here artificial intelligence can come to the rescue, with its access to databases holding thousands and millions of medical histories (and other structured information). Using machine learning algorithms, it classifies a specific case, quickly scans the scientific literature published on the relevant topic over a given period, studies the available similar cases and suggests a treatment plan. Moreover, AI can provide an individualized approach, taking into account the patient's genetic characteristics, the movement patterns collected by his wearable devices, and his previous medical history – the entire history of his life. AI will probably not replace the doctor (at least at the current stage of the technology's development), but it can become – and has already become – a useful tool, an assistant in diagnosis and treatment.

I will give some examples.

IBM Watson for Oncology. IBM Watson is a supercomputer that can answer questions formulated in natural language (that is, not in a programming language). It has access to various data sources: encyclopedias, databases of scientific articles, anthologies of knowledge. Thanks to its huge computing power, it processes the sources and gives the most accurate possible answer to the question asked.

IBM Watson for Oncology is a program that uses IBM Watson's capabilities to determine the optimal evidence-based cancer treatment strategy. Before its launch, hundreds of thousands of medical documents were uploaded to Watson for training, including 25 thousand medical histories, more than 300 medical journals and more than 200 textbooks – about 15 million pages of text in total. In 2011, a joint research project between IBM and Nuance Communications was announced, which was to result in a commercial product for clinical decision-making. In preparation for clinical practice, the supercomputer was assisted by clinical researchers from Columbia University and the University of Maryland in Baltimore.

Since 2013, IBM Watson has been used at the Memorial Sloan Kettering Cancer Center (MSK) in New York to assist in utilization management decisions in the treatment and care of lung cancer patients. Naturally, its database is constantly updated with new medical histories.

In the same year, IBM and the University of Texas MD Anderson Cancer Center launched a pilot project with the mission of "eradicating cancer". However, it was soon announced that the project (on which $62 million had already been spent by that time) had not met expectations and would be put on hold.

In July 2016, the IBM Watson for Oncology program was launched into commercial operation at Manipal Hospitals (the leading hospital network in India) to help doctors and patients choose personalized cancer treatment techniques. The Manipal Hospitals network also offers cancer patients the chance to get "Watson's opinion" online, on its website.

In February of this year, Jupiter Medical Center (Jupiter Island, Florida, USA) also announced that it had started using IBM Watson for Oncology. The press release dedicated to the launch reported that Watson can already provide effective assistance to clinicians in developing treatment plans for breast, lung, colon, cervical, ovarian and stomach cancers. By the end of the year, IBM and MSK plan to train IBM Watson for Oncology on 9 more types of cancer, thereby potentially covering 80% of cancer incidence in the world.

IBM Medical Sieve (a project under development). To evaluate MRI results, X-ray images and cardiograms, a doctor on average needs to spend significantly more time studying the image than a machine learning system does. At the same time, the accuracy of computer analysis is on average higher, allowing it to identify defects and formations that the doctor might miss – especially at the end of a shift, when doctors get tired and lose concentration. Moreover, by reducing the time needed for data recognition and processing, more patients can be served.

Google DeepMind Health is a Google DeepMind subproject that applies AI technologies to medicine. At the moment, DeepMind Health is known to be cooperating with Moorfields Eye Hospital in London: thousands of anonymized eye scans will be analyzed in order to find early symptoms of blindness. In collaboration with University College London Hospital, AI will also be involved in a project to develop an algorithm that can automatically distinguish healthy tissues of the head and neck from cancerous ones.

NeuroLex.co. When people speak, they convey meaning not only with words, but also with intonation, the intervals between words, and the speed and volume of speech. It is known from psychiatric practice that mental disorders are usually accompanied by certain changes in speech. It is therefore possible to teach neural networks to match speech patterns to diagnoses (based on existing clinical practice), making the diagnostic process faster and more accurate.

NeuroLex.co should not be confused with NeuroLex.org, a wiki project to compile an up-to-date dynamic vocabulary of neuroscience.
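To illustrate the kind of input such a system works with, here is a minimal sketch – my own, not NeuroLex's actual pipeline – that turns word timings from a speech recognizer into two of the characteristics mentioned above, speech rate and pause length. Features like these, alongside many others, could then be fed to a neural-network classifier trained on labelled clinical recordings.

```python
def speech_features(words):
    """words: list of (word, start_sec, end_sec) tuples from a speech recognizer."""
    total_time = words[-1][2] - words[0][1]
    rate = len(words) / total_time  # words per second
    # Pauses are the silent gaps between the end of one word
    # and the start of the next.
    pauses = [words[i + 1][1] - words[i][2] for i in range(len(words) - 1)]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    return {"rate": rate, "mean_pause": mean_pause}

# Hypothetical sample: slow speech with noticeable pauses.
sample = [("I", 0.0, 0.2), ("feel", 0.5, 0.9), ("tired", 1.5, 2.0)]
features = speech_features(sample)
```

Over many labelled recordings, shifts in such features relative to a patient's baseline are exactly the patterns a trained network would learn to associate with diagnoses.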

Face2Gene is a program that can diagnose many genetic diseases from photographs (mainly in children). Its target audience is practitioners and researchers. For more information, see the article "Diagnosis of genetic diseases by photo".

Human Diagnosis project (Human Dx) – an ambitious initiative of young doctors from San Francisco, combining, in their words, "the efforts of the collective mind" and machine learning. The Human Dx website claims that it is "the largest project in the world by the number of participating clinician authors." It is planned to collect descriptions of symptoms, results of medical examinations, personal and family medical histories, readings from diagnostic and wearable devices, laboratory results, medical imaging, genetic and epigenetic data, scientific publications in the biomedical sciences, medical statistics, and so on. On this basis a fundamental data structure will be developed, accessible to any doctor, patient or researcher – in general, to any person, organization, device or application. The short-term goal of the project is to assist in the timely and correct diagnosis of diseases and prescription of treatment, as well as in medical education. The long-term goal is to radically improve the cost, accessibility and effectiveness of medical care worldwide. The project has no final endpoint: it intends to accumulate, systematize and make all possible medical data as accessible and easily applicable as possible, for as long as its participants have the means and strength for this.

That is in words. In fact, according to the head of the startup, Jay Komarneni, "thousands of doctors" from 400 institutions in 60 countries are now supplying information to the project. That could mean, for example, two thousand (or five thousand: this is the figure that appears in the March 19 publication on the website of the American Medical Association). A lot, but clearly not enough to reverse the situation in world medicine. Allegedly there are already "hundreds of thousands" of described cases in the project database, but this is not enough for a functional classification of all diagnoses known to medicine.

It is not entirely clear whether some AI is already being trained on the data received, or whether this too is only planned. What definitely exists is a mobile application with which volunteer doctors can send information to the project's servers. The founders hope to attract at least one hundred thousand volunteers in the near future. And they probably have reason for that: Human Dx is funded by five venture capital firms at once. One of them, briefly describing its investment policy, reports that it invests from $50 to $100 million in a company over the entire course of its development. The website of another states that it does not give charity grants. That is, the "business angels" have not only invested a decent amount in Human Dx, but also seem to expect profit from it – which means development – and see potential.

On February 15 of this year, the Human Diagnosis project was announced as a semifinalist of the MacArthur Foundation's 100&Change competition. There are eight semifinalists in total. The winner will be named in September and will receive $100 million. One wants to believe that if this money ends up in the hands of Human Dx, it will bring closer the day when medicine in the world becomes more accessible and more effective.

In the meantime, Human Dx is trying to do useful things with the means it already has: every morning the project sends hundreds of clinics the so-called "every morning case" – a description of a case of non-obvious diagnostics from among those sent in by volunteers. Links to cases are also regularly posted on the project's Twitter. However, the cases themselves are available only to doctors registered on the site.

AI programs that provide "home hospital" conditions

As I have already said, the focus of treatment has now shifted from acute diseases (whose prevalence, thanks to the advances of medicine over the past century, has been significantly reduced) to chronic ones. And "chronic" patients need to be constantly aware of the state of their own health. Wearable devices (wearables) come to their aid, allowing them to monitor pulse, blood pressure, breathing and other health indicators. Based on the information received, these devices notify their owners of the actions to perform at the moment (take medicine, change the type of physical activity, etc.). The readings taken by these devices can be transmitted via smartphone directly to the doctor, so that he always "keeps his finger on the pulse" and can give recommendations as the indicators change. The simplest advice can be "sewn" directly into applications, which respond to the incoming data autonomously and quickly. But the main thing is that such wearable devices and mobile applications make it possible to collect arrays of data – and as those grow, so will the quality of the AI trained on them.
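What a piece of advice "sewn" directly into an application might look like can be sketched in a few lines. This is a deliberately simplified illustration: the threshold values and messages here are invented placeholders, not medical advice, and a real application would use limits set by the attending physician for the specific patient.

```python
def check_vitals(reading):
    """reading: dict with 'pulse' (beats per minute) and 'systolic' (mm Hg).

    Thresholds below are illustrative only; real apps would use
    per-patient limits configured by a doctor.
    """
    alerts = []
    if reading["pulse"] > 120:
        alerts.append("pulse is high: rest and contact your doctor")
    if reading["systolic"] > 160:
        alerts.append("blood pressure is high: take your prescribed medication")
    return alerts

# Readings within the thresholds trigger no alert; the app stays silent
# and simply forwards the data to the doctor.
normal = check_vitals({"pulse": 72, "systolic": 118})    # -> []
alarming = check_vitals({"pulse": 135, "systolic": 170})  # -> two alerts
```

Rules like this respond autonomously and instantly, while the accumulated stream of readings becomes exactly the kind of training data the paragraph above describes.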

Sense.ly (iOS, Android) is a "nurse app". An animated nurse on the phone screen asks how you felt today, whether you slept well, whether your blood pressure is normal, whether there are any complaints. You can answer aloud – the AI recognizes speech and immediately sends the information to the attending physician. If your answer contains triggers corresponding to certain symptoms, a brief summary of them will be displayed on the screen, after which the "nurse" will remind you about taking medications or procedures, or ask whether you want to contact a doctor. If you do, the app will immediately connect you via video link.

AiCure (iOS, Android) – you take a picture of yourself taking a pill; the application visually recognizes the type of medication, records the time it was taken and sends this information to the doctor. The task of the application is to ensure that medications are taken regularly.

Babylon Health (iOS, Android) is a mobile application that lets you get an online consultation with a British or Irish doctor with at least 10 years of medical experience, from anywhere in the world, on any day and at any time. In English, of course. You may ask: what does artificial intelligence have to do with it? The point is that before the consultation you can take a simple test right in the application, and also upload daily activity parameters, including directly from various wearable devices. The system will analyze the data, give you a preliminary diagnosis and recommend a doctor based on it. If the developers are to be believed, practice shows that Babylon Health already makes preliminary diagnoses no worse than an experienced general practitioner.

Unfortunately, some of the applications listed above (such as AiCure or Sense.ly) are adapted to "Western-style" medicine, that is, a system in which the patient has a permanent attending physician (a general practitioner, GP) who looks after the patient's health. In Russia and other countries where mass medicine is built on different foundations, they are hardly applicable. On the other hand, diagnostic applications can be very useful both for Russian district physicians and for doctors working in the African outback, where thousands of patients are always new, with unfamiliar symptoms, and where there is not even a normal laboratory to run tests.

AI in medical research

In addition to clinical practice, AI finds application in biomedical research. For example, a machine learning system can be used to check drug compatibility or to analyze the genetic code (indeed, for anything – for any task that requires deep learning, searching for correlations in big data, visual or auditory recognition, etc.).

Deep Genomics is a project to build a system that will make it possible to study, predict and interpret how genetic variations change important cellular processes such as transcription, splicing, etc. Changes in these processes can lead to diseases, and accordingly, knowing the cause of a disease can make therapy more effective.

Barriers

Unfortunately, very often people are not ready to adopt new technologies. As with any innovation, there are many prejudices and well-founded fears around AI in medicine.

Fear of machine revolt

A well-known fear is the belief that AI is a super-intelligent robot that can become a threat to humanity (a stereotype imposed mainly by popular cinema). Hearing the phrase "artificial intelligence", people remember SkyNet from Terminator, get scared and oppose it.

Government officials are often also carriers of the stereotype described above. As a result, innovative programs and long-term plans are signed with one hand, while with the other come laws and by-laws that stifle any real innovation in the cradle.

Loss of control over personal data and an unclear distribution of responsibility for it

In the case of AI in medicine (and not only medicine), there is the very real problem of trading privacy for efficiency.

Loss of privacy can lead to real problems for patients themselves. For example, medical-history data used to train artificial intelligence may fall into the hands of, say, an insurance company, with the expected consequence of a rise in the price of medical and life insurance (if, for example, a person does not lead a "healthy" lifestyle from the insurers' point of view). An employer may turn down an applicant knowing that he suffers from chronic diseases or is genetically predisposed to certain diseases.

And, in the end, it becomes unclear who owns the medical data – the patient, the doctor, the clinic, the computing service or someone else? And who can dispose of it, and to what extent?

Google as a criminal, a nurse and a medical device

Google, or rather its division DeepMind Health (a project applying AI for healthcare purposes, cooperating with the Royal Free Hospital and other medical institutions in London), has repeatedly gotten into trouble over personal data.

In 2016, Google DeepMind and the Royal Free London NHS Foundation Trust signed a memorandum of understanding under which DeepMind Health received full access to medical histories, ambulance and emergency call records, and radiology and pathology department data – all patient information recorded over five years by the Royal Free, Barnet and Chase Farm hospitals, including data on HIV infection, abortions and clinical depression. 1.6 million patients pass through these three clinics a year. The memorandum became known to New Scientist journalists, and the subsequent publications led to a complaint to the Information Commissioner's Office (ICO, the British personal data protection regulator).

The investigation showed that, although under the 1998 Data Protection Act all information transmitted by a hospital for any purpose to third parties and organizations without the patient's informed consent must be encrypted and anonymized or pseudonymized, the memorandum signed by the Royal Free and Google indirectly indicates that in this case nothing would be either encrypted or anonymized, since the information was supposed to be used only to help patients. Google and the Royal Free also referred to presumed consent, as in situations where a doctor shows a medical history or laboratory results to another doctor or a nurse: such situations are considered self-evident, and no special informed consent from the patient is required for them. The implication was that what DeepMind Health does is comparable to what doctors and nurses do for patients. However, the Information Commissioner noted that presumed consent usually applies in situations where both doctors, or a doctor and a nurse, are under the same roof. If a doctor wants to show a medical history to a colleague in another clinic or at home, the patient's consent is already required – and when it comes to huge amounts of data, the consent of each patient. The remaining possibilities are either a public health emergency, when personal data is used in the national interest, or a court decision. Google has neither, and an attempt by representatives of the parties that signed the memorandum to present the situation as an emergency – citing statistics on mortality from the diseases that DeepMind's products are meant to combat – did not meet with understanding among their opponents. In search of a solution, Google and the Royal Free also tried to present the case as if no information had been transmitted anywhere, but had only been uploaded into a new medical device, which is what DeepMind Health supposedly is.
But the opponents found fault here too: by law, any medical device must be approved by the Medicines and Healthcare products Regulatory Agency (MHRA, the British healthcare regulator) before being put into operation. Neither the foundation nor Google received such approval.

From the point of view of the law, Google is in the wrong on all counts, but so far everything has been limited to public disputes: such violations are prosecuted only if they lead to the commission of a serious crime. In that case the violator faces a fine of 5,000 pounds or imprisonment for up to 6 months.

Responding to the concerns of personal data advocates, DeepMind co-founder Mustafa Suleyman said in a statement to Computer Weekly:

"We are working with Royal Free clinicians to understand how technology can help doctors notice the deterioration of the patient's condition in time, in this case, acute renal failure. We always adhere to the highest standards of patient data protection. They will only be used for healthcare improvement purposes and will never be associated with Google accounts or products."

Earlier, British personal data advocates were outraged by the opposite: the fact that DeepMind Health uses anonymized data (this refers to the above-mentioned scanning of a million anonymized retinal images in collaboration with Moorfields Eye Hospital). The dissatisfied claimed that a patient should be able to demand the deletion of data related to him from any array, including an anonymized one, and demanded that Google provide such an opportunity.

Cybercrime and cyberterrorism

Gaps in the information security of AI systems and their peripherals are fraught not only with privacy violations, but also with direct threats to life and health. The most popular examples among alarmists are the remote hacking of a pacemaker and the deliberate "retraining" of a diagnostic and recommendation system so that it offers a deadly drug or procedure. In a critical case, this could lead to mass killings. Wearable devices must therefore be reliably protected from external attacks. But what kind of protection counts as reliable? Who evaluates that reliability? And who will be responsible if something like this does happen – the doctor? The clinic? The developer of the intelligent system? An information security specialist?

Self-medication and reduction of the number of jobs in medicine

It is unlikely that the average doctor is thinking about the possibility of being blamed for an AI error, but in general doctors, too, have little incentive to implement intelligent systems. In some places the system is such that a doctor's remuneration is directly proportional to the time spent on the patient, and if AI makes the correct diagnosis in five seconds, the doctor's services will immediately depreciate, at least "on average across the hospital". And if, thanks to AI, one doctor can see five times more patients, four others will have to be fired.

There are also entire regions whose residents already massively prefer Google search to a visit to the doctor. If artificial intelligence that makes diagnoses and offers therapy becomes available to them, only surgeons, dentists and procedural nurses will remain in the workplaces there. It is not at all certain that this will benefit the general level of health – but how do you convince a person to go to the doctor if he did not trust doctors before, and now also has access to machine diagnostics? And where will the doctors left without work because of AI go?

Not yet doctors, but already close

As The Guardian reported, on January 5 of this year the Japanese company Fukoku Mutual Life Insurance, which is mainly engaged in medical insurance, announced the dismissal of 34 employees in connection with the start of operation of a remote interface to the IBM Watson cognitive system – Watson Explorer.

Fukoku Mutual Life Insurance believes that thanks to AI it will increase productivity by 30% and recoup its investment in less than two years, as well as save more than 100 million yen in current expenses this year.

Watson Explorer understands natural language, recognizes symbols and images, and will be able to read tens of thousands of medical certificates and take into account the length of hospital stays, medical histories and any surgical procedures when calculating insurance payouts. Moreover, it will do all this much faster and better than the 34 dismissed employees – who, however, are hardly pleased by any of this.

"Gray" legal zone and legislative barriers

Giant arrays of data, including personal data, are being collected and used anyway, one way or another – in the era of global information services with billions of users it cannot be otherwise – but the legality of this is questionable. Property rights, the right to use personal data opened to AI, and the delineation of responsibility in the operation of artificial intelligence in medicine all require legislative regulation.

And there are several serious obstacles to this regulation happening quickly, soberly, efficiently and to people's benefit.

First, there is an insufficient level of expertise: legislators and government officials do not understand what actually needs to be done, since the industry is new and there are simply no ready-made precedents. We will have to act by trial and error, and mistakes in healthcare are especially dangerous, because people's lives and health are at stake.

Second, nation-states are not very willing to grant the rights to operate on citizens' data to anyone other than structures they more or less control. They are especially afraid to share this resource with international organizations and private – above all foreign – companies. In this they see, not without reason, a partial loss of sovereignty over citizens, the loss of a significant resource and the loss of a share of their powers. This, for example, is where all the Russian domestic turmoil of recent years comes from – with "foreign agents" and the requirement for transnational services to keep servers with the data of citizens of the Russian Federation on its territory.

Finally, legislators, like ordinary people, are to some extent in thrall to stereotypes and fears, ranging from fear of real dangers – the same cyberterrorism, or a possible rise in unemployment among doctors – to elementary neophobia and obscurantism.

In the absence of legislative regulation, anyone who undertakes to develop and promote AI services in a particular territory (building a customer base, establishing interaction with the structures of the local health system, language localization, etc.) has to act at their own risk and be prepared for the fact that at any moment everything they do may end up not in the "gray" zone but in the "black" one, that is, outside the law – with all the ensuing economic, legal and moral consequences.

Of course, progress cannot be stopped: there is a need for broad medical application of AI, and services based on it are increasingly in demand and will be provided one way or another. But if they find themselves in the symbolic territory of the "black market", this will not only scare many specialists and patients away from the industry, but also deprive people of guaranteed standards and protection, and create conditions for imitators and suppliers of knowingly substandard services to flourish.

Conclusion

Despite all the problems described, the very logic of the development of technology and society allows us to hope for the best. In the end, no efforts of the RIAA, RAO and the like have killed either sound recording equipment or file-sharing networks; the Internet develops in spite of the "Great Firewall of China", Roskomnadzor and the various restrictive acts of national states; anti-GMO alarmists cannot stop the development of genetics. So artificial intelligence has already come to medicine, it already works with data, and it cannot be stopped. One can only make its further penetration faster, more comfortable and safer – or, on the contrary, slow it down, complicate it and make a mess.

And it is within everyone's power to work for the first scenario and resist the second. To do so:

1. Help organizations developing medical AI systems to collect data; use wearable devices and applications like those mentioned in this article.

2. Turn to existing AI systems for help in diagnosis, whether you are a patient or a doctor, and show them to your attending physicians.

3. Help form a positive public opinion about the use of artificial intelligence in medicine, do outreach work, and help people overcome phobias and stereotypes.

4. In countries where legislators really depend on voters, try to initiate legislation that is not hostile to medical AI and that regulates the issues that are unclear today (for example, the privacy of health information, the opening of medical records to AI systems, and the delineation of responsibility in the various situations that arise from the use of artificial intelligence in diagnosis and treatment).

And if a broad social movement forms a constantly growing demand of many millions of people, if people massively realize that they need this and begin to use and demand it, the situation itself will contribute to the development of a social consensus on the issues that currently cause difficulties, and the legislative framework and mass participation in data collection will inevitably catch up. And then, most likely, the investments currently being made in AI in medicine will yield the desired result.

Portal "Eternal youth" http://vechnayamolodost.ru  05.04.2017

