05 June 2017

Will AI replace humans?

Within a short interval, RIA Novosti published two complementary opinions on this question.

1. Scientists have named the time when artificial intelligence will surpass humans

Researchers from the Future of Humanity Institute at Oxford University in the UK and the Department of Political Science at Yale University in the USA have estimated when artificial intelligence will surpass the capabilities of the human mind. The results are available in the electronic preprint repository arXiv.org (Grace et al., When Will AI Exceed Human Performance? Evidence from AI Experts).

According to the researchers, artificial intelligence will surpass humans at translating between languages by 2024 and will replace truck drivers by 2027; the profession of fiction writer will disappear by 2049, and surgeons will become unnecessary by 2053.

[Image: a drawing from the article in arXiv – VM.]

The study's authors put the probability that artificial intelligence will replace humans within 45 years at 50 percent.

In their work, the scientists relied on a global survey of more than 350 experts in the field of artificial intelligence.

2. Google: "artificial intelligence should complement, not replace, a person"

Greg Corrado, lead developer on the Google Brain project (which builds machine learning and artificial intelligence systems and integrates them into Google services), told RIA Novosti what artificial intelligence is and why an uprising of self-improving machines does not threaten us, and shared his thoughts on how humanity will adapt to life in the era of "intelligent machines".


In recent years, Corrado and his colleagues have developed new machine translation systems, taught Google's image search to recognize cats, dogs, and other objects, and created a neural network that increases the resolution of photos. They are now working on a voice assistant for Android phones.

– Greg, judging by your publications, your scientific career began in neurophysiology, not in artificial intelligence. How did AI systems become important both to you personally and to Google as a whole?

– Initially I was interested in completely different things – first I studied physics, then moved into brain science. At the time, the brain seemed to me the most interesting physical system, and I wanted to understand how its development led to the emergence of intelligent, self-aware beings.

After working in that field for some time, I realized I was no less interested in studying intelligence in all its forms – artificial as well as biological. Besides, it seemed to me then that we could move much faster in artificial intelligence than in the study of the human brain.

With Google the situation is different – the company has in effect been working on artificial intelligence since its founding. Implementing the search engine as its founders, Sergey Brin and Larry Page, conceived it requires at least some form of artificial intelligence.

For us, the main task is not to create AI as a product we could pack into a box and sell, but to use such systems to organize and process information. The amount of information is growing constantly, and the more data we receive, the harder it becomes to see what is important and what is not.

– The mathematical ideas behind deep neural networks and nonlinear learning methods appeared more than ten years ago, yet they became popular only in the last two or three years. Why?

– I can say more – the principles behind these networks appeared even earlier, in the 1960s, and were given their final mathematical form in the 1980s and 1990s. They have hardly changed since. The only reason they are being revived right now is that computers have become noticeably faster in recent years and we can increase their power cheaply.

In the 1990s and early 2000s we did not understand this, so neural networks were regarded as an interesting but useless toy. In fact, they simply lacked the computing resources to do anything useful. When data volumes and computing resources were small, simple mathematical approaches used them more efficiently than neural networks did.
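The building blocks Corrado is referring to really are this simple: layers of units, each computing a weighted sum of its inputs passed through a nonlinear activation. As a minimal illustration (with weights picked by hand rather than learned, and with the names `sigmoid`, `neuron`, and `xor_net` chosen here for clarity), a two-layer network of such units computes XOR – something no single linear unit can do:

```python
import math

def sigmoid(x):
    """Classic nonlinear activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One unit: a weighted sum of its inputs passed through the activation."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    # Hidden layer: one unit approximates OR, the other AND.
    h_or = neuron([x1, x2], [20, 20], -10)
    h_and = neuron([x1, x2], [20, 20], -30)
    # Output unit computes "OR but not AND", which is exactly XOR.
    return neuron([h_or, h_and], [20, -20], -10)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", round(xor_net(x1, x2)))  # -> 0, 1, 1, 0
```

In practice such weights are found automatically by backpropagation rather than set by hand; the point of the sketch is only that the mathematics underlying today's deep networks is this elementary – what changed, as the interview notes, is the available data and compute.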

We first realized that deep networks and machine learning could be genuinely useful around 2010 or 2011, and it took us about five years to test that idea and prove it. Now everyone in the industry understands this and is pushing in the same direction.

– Every year our civilization produces more and more data, which will soon be impossible to process without artificial intelligence. Can humanity continue to develop without such systems, as opponents of AI believe it should?

– I don't think artificial intelligence or machine learning is inherently good or bad – it is simply a set of new technologies, and how they are used depends entirely on us. As a society we must decide what they will be used for and how; we could, for example, abandon them entirely for ethical or political reasons.

Personally, I think such a decision would be extremely inefficient – akin to banning automatic sewing machines and forcing all factories to sew clothing by hand. Instead, we need to work out where their use will bring real benefit and be consistent with our values.

And there are, it seems to me, countless areas where AI and machine learning will bring enormous benefit to society – many of which we cannot yet even imagine.

– Google today uses neural networks in Google Translate, Google Image Search, and many other products. Where will the main breakthroughs in their application come in the near future?

– When people talk about developing artificial intelligence, they usually imagine something far grander than it really is. We are simply trying to make machines less stupid than they are today and to have them interact with humans as naturally as possible.

Artificial intelligence also lets us correct machines' errors and have them learn from those mistakes – a kind of dialogue between human and machine.

I hope that within the next ten years we will have a system, possibly running on phones and other mobile devices, that interacts dynamically with its owner and adapts to the owner's personality and specific wishes.

For example, you ask such an assistant to help you reach some point on the map. When it offers a route, you can ask it to run closer to a park, or express other preferences, and the system will understand and comply. We want communication with such systems to feel as natural as possible.

– In recent years there has been growing talk of creating self-improving artificial intelligence systems, which many serious scientists – Stephen Hawking, for example – consider very dangerous for humans. Is the danger real?

– In fact, self-improving artificial intelligence has not yet been created. It is technically possible that such systems will arise someday, but none exist yet, and they are unlikely to appear in the foreseeable future.

Today, for example, there are robots that work on assembly lines putting together cars and other complex devices. Those robots were themselves assembled on similar lines, with the participation of similar robots, but that does not mean the robots improved themselves. The same is true of machine learning technologies – we use them to analyze data, including data from other artificial intelligence systems, but they still cannot improve themselves without human creativity.

The AlphaGo system, for example, is capable of a very limited form of self-learning, but it cannot radically change its architecture and remake itself – say, to adapt to entirely different tasks. What exists today is only a small part of what would be needed to create self-improving systems; the rest of the road has not yet been traveled.

So it seems to me that creating artificial intelligence today is more an art than engineering or science, and the leaders in our field often surprise themselves and their colleagues with unexpected discoveries and finds. In general, creating AI today remains the exclusive prerogative of humans.

– Could such systems be used for more unusual tasks – for example, as the "brain" of rovers or probes that would independently explore distant planets and worlds?

– In principle, this is quite possible, and it seems to me that in the future artificial intelligence will be used for research in space or on the ocean floor, where direct control of equipment is impossible for one reason or another. AI systems will help such rovers or robots handle routine tasks and avoid dangers, but the real scientific research will still be done by people.

Only a human can set specific scientific tasks and determine how to solve them. We can tell a robot, "Get close to this crater and try to find white rocks in it," but the robot itself cannot, even in principle, understand why those rocks interest us. Humans are very good at setting tasks and finding ways to solve them; machines cannot do this yet. Robots can be assistants here, but not researchers.

– This question is directly related to the fact that many people do not understand what artificial intelligence is and believe that scientists are creating a complete analogue of a human being, with a mind, feelings, and other human traits. Why is that?

– It seems to me this comes from people not understanding why artificial intelligence exists at all. They think we are building human-like machines. I reject that completely: we are creating AI not to replace people but to complement their capabilities.

For example, I use a computer every day, and there are things both it and I can do on our own – adding or multiplying numbers, say. The computer does them faster, and I value it partly for that. So I can divide the labor, hand the calculations over to the computer, and accomplish more than I could without its help. That, it seems to me, is the essence of artificial intelligence.

The idea of creating a complete analogue of a human has its roots, I think, in science fiction or philosophy. Philosophers have long wondered whether we can create machines similar to humans, but that question has little to do with whether such "artificial people" would be useful to us – or whether they are possible at all.

So from a practical standpoint, it is better to create intelligence that complements our own mind and extends its capabilities rather than copies it.

– Systems already exist today – self-driving cars, for instance – in which a person's life depends on the actions of artificial intelligence. Do we need fundamentally new legal norms to determine who bears responsibility for the actions of AI?

– This is a very important question. Let me stress right away that I am not a lawyer, but it seems to me there have been similar problems in the past. For centuries there have been laws determining who is responsible when someone's horse kicks a stranger, a dog attacks a passer-by, a machine breaks down and injures a worker, or some other such incident occurs.

It seems to me we should draw on this experience in assigning responsibility for the behavior of artificial intelligence systems. What remains is to discuss how to adapt those principles to our interactions with autonomous cars and other devices of this kind.

This discussion, I am more than sure, will be heated and lengthy, but I do not think it will require introducing entirely new legal norms and principles. Some people do suggest treating AI systems as "persons" and so on, but to me personally that seems strange and hard to comprehend.

– Futurologists and some of your colleagues predict that the further development of AI will put many people in skilled but routine jobs – accountants, drivers, officials – out of work. Is this a real problem, and how can it be solved?

– Of course this problem exists. My parents and grandparents were accountants. My grandmother started out with pen and paper and ended up with calculators; my mother moved from calculators to spreadsheets on a computer. These changes radically altered how accountants work, but the essence of the profession did not change – accountants did not disappear; they simply began to work faster and better.

It seems to me something similar will happen with artificial intelligence. The content of many professions will change while their essence remains the same; some may even disappear, but that fate will befall only a small share of them.

Something may also happen like what happened with web design in the late 1990s – before the advent and spread of the Internet, no one could have imagined that such work would exist. On the other hand, it would be foolish and wrong to take this issue lightly and assume nothing will change. We need to understand that these technologies will change the world, and that we will have to adapt to them and help others adapt as well.

Portal "Eternal youth" http://vechnayamolodost.ru  05.06.2017

