How do you create a robot that wants to change the world?


Computer scientist Christoph Salge is trying to eliminate the need for rules that govern robots' behavior. His strategy is to give them a goal: to make us more powerful. Salge works in the Game Innovation Lab at New York University. Sasha Maslov interviewed Salge for Quanta Magazine, and from the conversation we learn that we may not be able to hold back the uncontrollable onset of the technological singularity.

Isaac Asimov's famous Three Laws of Robotics — constraints on the behavior of androids and machines meant to ensure the safety of humankind — were never finished. The laws first appeared in a 1942 Asimov story, then in classic works like "I, Robot," and read roughly as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Of course, many contradictions and loopholes can be found in these laws (which Asimov himself, in fact, exploited). In our current age of advanced machine-learning software and autonomous robotics, defining and implementing an ironclad ethics for artificial intelligence has become an urgent problem for organizations such as the Machine Intelligence Research Institute and OpenAI.

Christoph Salge has taken a different approach. Instead of imposing top-down philosophical definitions of how artificial agents should or should not behave, Salge and his colleague Daniel Polani explore a bottom-up path: what should a robot want to do in the first place? They describe it in their article "Empowerment as Replacement for the Three Laws of Robotics." Empowerment, a concept born at the intersection of cybernetics and psychology, describes an agent's intrinsic drive to both persist within and act upon its environment. "Like an organism, it wants to survive. It wants to leave its mark on the world," Salge explains. The Roomba vacuum cleaner, programmed to seek its charging station when its battery runs low, is a rudimentary example of empowerment: to keep functioning in the world, it must keep itself charged and so continue its existence — that is, survive.

Empowerment may sound like a recipe for producing exactly the outcome that advocates of safe artificial intelligence, like Nick Bostrom, fear: a powerful autonomous system concerned only with its own interests and running amok in the process. But Salge, who studies human-machine social interaction, asks: what happens if an empowered agent "also looks after the empowerment of another? You don't just want the robot to want to stay operational — you also want it to want to support its human partner."

Salge and Polani realized that information theory offers a way to cast this mutual empowerment in a mathematical framework that a non-philosophizing artificial agent could actually put into practice. "One of the drawbacks of the Three Laws of Robotics is that they are based on language, and language is highly ambiguous," says Salge. "We're trying to find something that can actually be operationalized."

Some technologists believe that AI is a major, even catastrophic, threat to human existence. What do you think?

I'll refrain from taking a side. I do think there is real fear of robots and of the growing influence of AI right now. But I think in the short term we should probably be more concerned about possible job displacement, automated decision-making, loss of democracy, loss of privacy. I don't know how likely the appearance of an unstoppable AI is in the near future. But even an AI that merely runs the health-care system and issues prescriptions raises ethical questions we should be thinking about as it operates.

How can the concept of empowerment help us deal with these problems?

I think the idea of empowerment fills a niche. It will keep an agent from letting a human die, but once that threshold is secured, it will sustain the intention to create additional possibilities for human expression and influence on the world. In one of Asimov's books, the robots simply end up putting all the humans in safe containers. That would be undesirable. Continuously enhancing our ability to influence the world seems to me a much more interesting goal to pursue.

You tested your ideas on virtual agents in a video-game environment. What happened?

An agent motivated by its own empowerment will dodge projectiles and avoid falling into pits — in general, it avoids any situation that could lead to loss of mobility, death, or damage, anything that would reduce its operationality. It simply keeps itself running.

Paired with a human player whose empowerment it also maintains, we saw that the virtual robot keeps a certain distance so as not to obstruct the person's movement. It won't box you in, and it won't stand in a passage so that you can't get through. It stays as close to you as possible so that it can help. This leads to behavior in which it can both take the initiative and follow.

For example, we created a scenario with a laser barrier that is dangerous to the human but safe for the robot. If the human in this game moves toward the laser, the robot has a growing incentive to block it. The incentive increases when the person stands directly in front of the barrier, as if intending to cross it. And the robot actually blocks the laser by placing itself in front of the person.
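The laser scenario can be sketched in a toy model. The following is my own reconstruction, not code from the paper: a one-dimensional corridor with a lethal laser cell, where the robot blocks the laser exactly when blocking raises the human's short-horizon empowerment (measured here, since the world is deterministic, as the log-count of reachable states). The corridor size, laser position, and the `robot_should_block` helper are all assumptions made for this sketch.

```python
import math

# Toy reconstruction of the laser scenario (my construction, not the
# paper's code): the robot blocks the laser when doing so raises the
# human's short-horizon empowerment.

LASER, SIZE = 4, 7       # laser cell and corridor length (illustrative)
MOVES = [-1, 0, 1]       # the human may step left, stay, or step right

def human_step(pos, move, blocked):
    """One human move; stepping into an unblocked laser is fatal."""
    if pos == "dead":
        return "dead"                        # death is absorbing
    nxt = min(SIZE - 1, max(0, pos + move))  # clamp to the corridor
    return "dead" if nxt == LASER and not blocked else nxt

def human_empowerment(pos, n, blocked):
    """log2 of the distinct states the human can reach within n steps."""
    frontier = {pos}
    for _ in range(n):
        frontier = {human_step(s, m, blocked) for s in frontier for m in MOVES}
    return math.log2(len(frontier))

def robot_should_block(human_pos, n=2):
    """Block exactly when blocking strictly raises human empowerment."""
    return human_empowerment(human_pos, n, True) > human_empowerment(human_pos, n, False)
```

In this toy world the robot has no incentive to block while the human is far from the laser, and a strict incentive once the human stands right next to it, mirroring the behavior described above.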

Did these agents show any unintended behavior, like what emerges from the Three Laws in Asimov's books?

At first the behavior was good. For example, the virtual robot would intercept enemies who were trying to kill you. From time to time it would jump in front of a bullet if that was the only way to save you. But what surprised us most from the start was that it was also very afraid of the human.

The reason is its "myopic" model: in essence, it analyzes how particular sequences of two or three actions affect the world, both for you and for it. So as a first step, we modeled the player as acting randomly. In practice, that meant the agent treated the human as a kind of psychopath who might, at any moment, shoot the agent, for example. So the agent had to choose its situations very, very carefully, so that the human couldn't kill it.

We needed to fix this, so we modeled a so-called trust assumption. In essence, the companion agent acts on the assumption that the human will only choose actions that do not restrict the agent's own empowerment — which is probably a more fitting model for a companion anyway.

We also noticed that if you had, say, 10 health points, the companion wasn't particularly bothered if you lost eight or nine of them — it might even shoot you once in a while, just for fun. That's when we realized there was a gap between the world we live in and the model in a computer game. Once we modeled the disability caused by losing health, the problem went away. It could also have been solved with a less short-sighted model, one that could evaluate actions a couple more steps into the future. If the agent could look further ahead, it would see that having more health might be useful for events to come.
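The "disability" fix can be made concrete in a toy model of my own (the numbers and the health threshold are arbitrary): as long as health has no effect on what the human can do, empowerment cannot distinguish 10 health points from 1; once low health limits mobility, losing health costs empowerment and the companion has something to protect.

```python
import math

# Toy model of the "disability" fix (my construction; the numbers are
# arbitrary): empowerment only notices health once low health limits
# what the human can do.

def reachable_states(pos, n, speed):
    """All positions reachable on a line in n steps at the given speed."""
    frontier = {pos}
    for _ in range(n):
        frontier = {s + d for s in frontier for d in range(-speed, speed + 1)}
    return frontier

def empowerment(health, pos=0, n=2, coupled=True):
    """log2 of reachable states; if coupled, low health halves speed."""
    speed = 2 if (health > 3 or not coupled) else 1
    return math.log2(len(reachable_states(pos, n, speed)))
```

With the coupling switched off, the agent sees no difference between full and near-zero health; with it on, the drop in reachable states gives the companion a quantitative reason not to shoot you "just for fun."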

Before that fix, since a change in my number of health points had no effect at all on my empowerment, the agent would conclude: "Shoot him, don't shoot him — what's the difference?" And sometimes it fired. Which, of course, is a problem. I don't want random shots at players. So we added a fix that makes the virtual robot care a little more about your empowerment than about its own.

How do you make these concepts precise?

If you think of agents as control systems, you can decompose them in terms of information: things happen in the world, and one way or another they relate to you. We're talking about information not just as things you perceive, but as influences of any kind — it could be matter, anything flowing between the world and you. It might be temperature, or the nutrients in your body. Anything that crosses the boundary between the world and the agent carries information. Likewise, the agent can affect the outside world in many different ways, conveying information outward as well.

You can treat this flow as a channel capacity, a concept from information theory. You have high empowerment if you are able to take different actions that lead to different outcomes. If any of those capabilities deteriorate, your empowerment drops, because the loss of capability corresponds to a quantifiable reduction in the channel capacity between you and the environment. This is the core idea.
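In a fully deterministic world, that channel capacity reduces to counting: every n-step action sequence leads to exactly one end state, so the capacity between action sequences and outcomes is the log of the number of distinct reachable states. A minimal sketch under that assumption (the grid, the action set, and the wall layout are my own illustration):

```python
import math
from itertools import product

# Deterministic n-step empowerment as log2 of the number of distinct
# reachable states (grid, actions, and walls are illustrative).

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay
WALLS = {(1, 0), (1, 1), (1, -1)}                      # a short obstacle column

def step(state, action):
    """Deterministic transition: move unless a wall is in the way."""
    nxt = (state[0] + action[0], state[1] + action[1])
    return state if nxt in WALLS else nxt

def empowerment(state, n):
    """Channel capacity between n-step plans and end states, in bits."""
    reachable = set()
    for plan in product(ACTIONS, repeat=n):
        s = state
        for a in plan:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))
```

In open space at `(5, 5)` the agent can reach 13 distinct cells in two steps (about 3.7 bits); hemmed in next to the walls at `(0, 0)` it reaches only 9 (about 3.2 bits) — lost mobility shows up directly as lost capacity.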

How much does an agent need to know for its empowerment to take full effect?

Empowerment has the advantage that it can be applied even when your knowledge is incomplete. The agent does need a model of how its actions affect the world, but it doesn't need a complete understanding of the world and all its intricacies. Unlike approaches that try to model everything in the world as fully as possible, in our case you only need to figure out how your actions affect your own perception. You don't need to find out everything about everything; you just need an agent that explores the world. It does things and tries to understand how its actions affect the world. The model grows, and the agent gets better and better at working out how far its empowerment extends.
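A minimal sketch of that idea, again my own construction: the agent acts randomly in a hidden one-dimensional world, records only the transitions it has actually experienced, and estimates its empowerment from that partial model. The estimate grows toward the true value as the model fills in.

```python
import math
import random

# The agent never sees the full world: it learns (state, action) -> next
# state pairs from its own experience and estimates empowerment from that
# partial model (the corridor world and all numbers are illustrative).

ACTIONS = ["left", "right", "stay"]

def true_step(state, action):
    """Hidden ground truth: a corridor of cells 0..9."""
    if action == "left":
        return max(0, state - 1)
    if action == "right":
        return min(9, state + 1)
    return state

def explore(start, steps, seed=0):
    """Act randomly and record every observed transition."""
    rng = random.Random(seed)
    model, s = {}, start
    for _ in range(steps):
        a = rng.choice(ACTIONS)
        nxt = true_step(s, a)
        model[(s, a)] = nxt
        s = nxt
    return model

def estimated_empowerment(model, state, n):
    """n-step empowerment using only transitions the agent has seen."""
    frontier = {state}
    for _ in range(n):
        frontier = {model.get((s, a), s)  # unknown action: assume no effect
                    for s in frontier for a in ACTIONS}
    return math.log2(len(frontier))
```

With only a few steps of experience the estimate is low; a longer random walk fills in the nearby transitions and the estimate approaches the true two-step value of log2(5) bits at the middle of the corridor.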

So far you have tested this in virtual environments. Why not in the real world?

The main obstacle to scaling this model up and putting it in a real robot is the difficulty of computing the channel capacity between an agent and a human in an environment as rich as the real world. These computations have yet to be made efficient. I'm optimistic, but for now the problem is purely computational. That's why we test the system in a computer game, in simplified form.

It sounds as if empowerment, ideally, would make our machines behave like powerful guard dogs.

I actually know some robotics enthusiasts who deliberately model companion behavior on dogs. I think if robots come to treat us the way our dogs treat us, we may all get along in the future.