HAL 9000 will never exist: emotions cannot be programmed

HAL 9000 is one of the most famous artificial intelligences in cinema. This superior intelligent computer malfunctions on the way to Jupiter in Stanley Kubrick's iconic film "2001: A Space Odyssey", which is currently marking the 50th anniversary of its release. HAL can speak, understand human facial expressions, read lips and play chess. His superior computing capabilities are paired with distinctly human traits: he can interpret emotional behavior, reason and appreciate art.

By giving HAL emotions, writer Arthur C. Clarke and director Stanley Kubrick made him one of the most human-like depictions of intelligent technology. In one of the most beautiful scenes in science fiction cinema, HAL says he is "afraid" when mission commander David Bowman begins disconnecting his memory modules after a series of deadly events.

HAL is programmed to assist the crew of the ship Discovery. He runs the ship, supported by his powerful artificial intelligence. But it soon becomes clear that he is very emotional: he can feel fear and sympathy, even if only slightly. It makes for great fiction, but such an emotional artificial intelligence is simply impossible in our reality at the moment. Whatever depth of emotion or feeling you find in today's technology is entirely fake.

“Perfect” artificial intelligence

In the film, when Bowman begins manually overriding HAL's functions, HAL asks him to stop; and as we watch the astonishing destruction of HAL's "mental" faculties, the AI tries to calm itself by singing "Daisy Bell", probably the first song ever performed by a computer.

In fact, the audience begins to feel that Bowman is killing HAL. The shutdown plays like revenge, especially given what we learned from the film's earlier events. But while HAL can make emotional judgments, real-world AI will certainly remain limited to reasoning and decision-making. Moreover, despite what futurists say, we will never be able to program emotions the way HAL's fictional creators did, because we do not understand them. Psychologists and neuroscientists are certainly trying to work out how emotions interact with cognition, but so far they have not succeeded.

In one study of Chinese-English bilinguals, researchers examined how the emotional meaning of words can alter unconscious mental processes. When participants read positive or neutral words such as "holiday" or "tree", they unconsciously retrieved the corresponding Chinese word forms. But when the words had negative meanings, such as "murder" or "rape", their brains blocked access to the native language, without the participants being aware of it.

Reasoning and emotions

Reasoning, on the other hand, we do understand. We can describe how rational decisions are reached, write down the rules, and turn those rules into processes and code. But emotions remain a mysterious evolutionary inheritance. Their source cannot be pinned down; it is diffuse, and emotion is not simply an attribute of the mind that can be implemented deliberately. To program something, you need to know not only how it works but why. Reasoning has goals and purposes; emotions do not.
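To make the contrast concrete, here is a minimal sketch, in Python, of what "turning rules into code" means; the rule itself is hypothetical, invented purely for illustration:

def approve_maneuver(fuel_margin: float, hull_ok: bool, crew_agrees: bool) -> bool:
    # Each condition is an explicit, inspectable rule with a stated purpose.
    return fuel_margin > 0.1 and hull_ok and crew_agrees

# A go/no-go decision like this can be specified, tested and coded.
print(approve_maneuver(0.2, True, True))  # True

Nothing comparable can be written for "feel sympathy here": we can state how a rule fires, but not why an emotion should.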

In 2015, a study was conducted with Bangor University students who speak Mandarin. They were asked to play a game with the opportunity to win money. In each round they had to take or leave a bet shown on the screen: for example, a 50% chance to win 20 points and a 50% chance to lose 100.
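For reference, the example bet is a losing one on average. A quick calculation (a sketch in Python, using only the numbers given above) makes that explicit:

# Expected value of the example gamble: 50% chance to win 20 points,
# 50% chance to lose 100 points.
win_prob, win_points, loss_points = 0.5, 20, 100
expected_value = win_prob * win_points - (1 - win_prob) * loss_points
print(expected_value)  # -40.0: on average the bet loses 40 points

A purely rational player would therefore decline this particular bet, which makes the language effect described below all the more striking.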

The scientists suggested that feedback in the participants' native language would carry more emotional weight, so that they would not behave as they did when communicating in their second language, English. And so it proved: when the feedback came in their native Chinese, subjects were 10% more likely to bet in the next round, regardless of the risk. This shows that emotions affect reasoning.

Returning to AI: since emotions cannot be fully implemented in a program, however sophisticated it may be, a computer's reasoning will never bend under the pressure of its emotions.

One possible interpretation of HAL's strange "emotional" behavior is that he was programmed to simulate emotions in extreme situations: to manipulate people by appealing to common sense where possible, but to their emotional selves when human reason fails. This is the only way to account for such a convincing simulation of emotion under those circumstances.
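If that reading is right, HAL's "fear" would be a scripted fallback rather than a felt state. A hypothetical sketch in Python of such a design (not anything from the film's fictional specification) might look like this:

def respond(recommendation: str, rational_appeal_failed: bool) -> str:
    # Simulated emotion as a rule-triggered strategy, not a felt state.
    if rational_appeal_failed:
        return "I'm afraid, Dave."  # scripted emotional appeal
    return "My analysis recommends: " + recommendation

print(respond("abort the disconnection", rational_appeal_failed=True))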

In my opinion, we will never create a machine that can truly feel, hope, fear or rejoice. Anything less is a simulacrum, because a machine will never be human, and emotions are human by default.