When will we have true artificial intelligence?

The field of artificial intelligence has come a long way, but many believe it was officially born when a group of scientists convened at Dartmouth College in the summer of 1956. In the years before that, computers had improved enormously; they could already perform some computational operations much faster than people. Given all this incredible progress, the scientists' optimism was understandable. The brilliant computer scientist Alan Turing had proposed the possibility of thinking machines a few years earlier, and researchers had arrived at a simple idea: intelligence is, at bottom, just a mathematical process. The human brain is a machine, to a certain extent. Isolate the process of thinking, and you can imitate it.

At the time, the problem did not seem especially difficult. The Dartmouth scientists wrote: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." This proposal, incidentally, contained one of the first uses of the term "artificial intelligence". There was no shortage of ideas: perhaps imitating the way neurons act in the brain could teach machines the abstract rules of human language.

The scientists were optimistic, and their efforts were rewarded. Before long they had programs that seemed to understand human language and could solve algebra problems. People confidently predicted that human-level machine intelligence would arrive within twenty years.

It is fitting that the business of forecasting when we will have human-level artificial intelligence was born at about the same time as the field of AI itself. It all goes back to Turing's first paper on "thinking machines", in which he predicted that the Turing test (in which a machine must convince a person that it, too, is human) would be passed within 50 years, by the year 2000. Today, of course, people still predict it will happen within the next 20 years; among the best-known "prophets" is Ray Kurzweil. There are so many opinions and forecasts that it sometimes seems AI researchers have recorded the following message on their answering machines: "I predicted that you would ask this question, but no, I cannot predict that accurately."

The problem with trying to predict an exact date for human-level AI is that we don't know how far we have to go. It is not like Moore's law. Moore's law, the doubling of computing power roughly every two years, makes a specific prediction about a specific phenomenon. We more or less understand how to move forward (keep improving silicon chip technology) and we know the limits of the current approach (until we start working with chips at the atomic scale). The same cannot be said of artificial intelligence.
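To see the contrast, here is a minimal sketch (not from the original article) of what makes Moore's law a testable forecast: under the common two-year-doubling reading, it yields a concrete number for any time horizon, something no forecast of AI progress can offer.

```python
# A back-of-the-envelope illustration: assuming computing capacity
# doubles every two years (the common paraphrase of Moore's law),
# we can project a concrete growth factor for any horizon.

def moores_law_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Projected multiple of computing capacity after `years` have passed."""
    return 2.0 ** (years / doubling_period)

print(moores_law_multiple(10))  # one decade  -> 32.0x
print(moores_law_multiple(20))  # two decades -> 1024.0x
# No comparable formula exists for "progress toward human-level AI",
# which is exactly why those forecasts are so hard to pin down.
```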

Common mistakes

A study by Stuart Armstrong examined the trends in these forecasts. In particular, he looked for two main cognitive biases. The first is the idea that AI experts tend to predict that AI will arrive (and make them immortal) right before they die. This is the "rapture of the nerds" critique often aimed at Kurzweil: that his predictions are motivated by fear of death and a desire for immortality, and are therefore fundamentally irrational. The creator of the superintelligence becomes almost an object of worship. This criticism typically comes from people who work in AI and know firsthand the frustrations and limitations of today's AI.

The second bias is that people almost always choose a horizon of 15-20 years. That is close enough to convince others you are working on something that will be revolutionary in the near future (people are less drawn to efforts that will pay off only centuries from now), but not so close that you can quickly be proven dead wrong. People are happy to predict AI arriving within their lifetime, just preferably not tomorrow and not next year, but in 15-20 years.

Measuring progress

Armstrong notes that if you want to assess the credibility of a specific forecast, there are many factors you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain at least gives you a clear framework for measuring progress. Each time we produce a more detailed map of the brain, or successfully simulate some part of it, we move toward a concrete goal that should, presumably, result in human-level AI. Twenty years may not be enough to reach that goal, but at least the progress can be assessed scientifically.

Now compare that approach with the approach of those who say that AI, or something conscious, will simply "emerge" once a network is complex enough and has enough computing power. Perhaps that is how we imagine human intellect and consciousness arising in the course of evolution, though evolution took billions of years, not decades. The problem is that we have no empirical data: we have never watched consciousness arise from a complex network. We do not know whether it is possible, and we cannot know when to expect it, because progress along that path cannot be measured.

There is an enormous difficulty in understanding which tasks are genuinely hard to implement, and it has haunted us from the birth of AI to the present day. Understanding human language, randomness and creativity, self-improvement, and all of it at once: for now, simply impossible. We have learned to process natural speech, but do our computers understand what they are processing? We have built AI that seems "creative", but is there even a drop of real creativity in its actions? And exponential self-improvement leading to a singularity seems altogether far-fetched.

We do not really understand what intelligence is. For example, AI experts consistently underestimated AI's ability to play Go. In 2015, many thought AI would not learn to play Go until 2027. It took two years, not twelve. Does that mean AI is a few years away from writing the greatest novel? From understanding the world conceptually? From approaching human-level intelligence? Unknown.

Not human, but smarter than humans

Perhaps we are framing the question incorrectly. For example, the Turing test has not yet been passed in the sense of an AI convincing a person in conversation that they are talking to a human; but AI's computing ability, and its ability to recognize patterns and drive a car, already far exceed human levels. The more decisions "weak" AI algorithms make, the more the Internet of Things grows, the more data is fed to neural networks, and the greater the impact of this "artificial intelligence" will be.

Perhaps we don’t yet know how to create human-level intelligence, but just as we don’t know how far we can go with the current generation of algorithms. While they don’t look like those scary algorithms that undermine the social order and become a kind of vague superintelligence. And similarly, this does not mean that we must adhere to the optimistic forecasts. We have to ensure that the algorithms will always put the value of human life, morality, morality that algorithms were not completely inhuman.

Any forecast should be taken with a grain of salt. Don't forget that in the early days of AI, success seemed just around the corner. And we think the same today. Sixty years have passed since the scientists gathered at Dartmouth in 1956 to "create intelligence within twenty years", and we are still chasing that twenty-year horizon.