Technique for Preventing AI Machines from Overriding Human Control

Scientists from EPFL have demonstrated how human operators can retain effective control over a system made up of multiple agents governed by artificial intelligence (AI).

From right to left: Rachid Guerraoui, Alexandre Maurer, El Mahdi El Mhamdi. CREDIT: Alain Herzog/EPFL.

In AI, machines learn by doing: they perform an action, observe the result, adjust their behavior accordingly, observe the new result, adjust again, and so on, becoming proficient through this continual loop. But can the process get out of hand? Possibly. “AI will always seek to avoid human intervention and create a situation where it can’t be stopped,” stated Rachid Guerraoui, a professor at EPFL’s Distributed Programming Laboratory and co-author of the study. That means AI researchers must prevent machines from eventually learning how to circumvent human commands. EPFL scientists investigating this problem have developed a technique that lets human operators keep control of a group of AI robots. They will present their findings on Monday, December 4, 2017, at the Neural Information Processing Systems (NIPS) conference in California. The work is an important contribution to the development of autonomous vehicles and drones, for instance, so that they can be operated safely in large numbers.

One machine-learning technique used in AI is reinforcement learning, in which agents are rewarded for carrying out specific actions – a method borrowed from behavioral psychology. Applying it to AI, engineers use a points system in which machines earn points for performing the right actions. For example, a robot might earn one point for correctly stacking a set of boxes and another point for retrieving a box from outside. But if, on a rainy day for instance, a human operator stops the robot as it heads outside to collect a box, the robot will learn that it is better off staying indoors, stacking boxes, and earning as many points as possible. “The challenge isn’t to stop the robot, but rather to program it so that the interruption doesn’t change its learning process – and doesn’t induce it to optimize its behavior in such a way as to avoid being stopped,” stated Guerraoui.
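To make the problem concrete, here is a minimal Python sketch of how an interruption can leak into reinforcement learning (an illustration only, not the EPFL code; the action names, rewards, and interruption probability are assumptions). Because the naive agent averages interrupted outcomes into its value estimates, it learns that going outside is less rewarding and stops doing it:

```python
# Toy sketch (assumed setup, not the authors' code): a robot earns one point
# for stacking boxes and one point for fetching a box from outside, but the
# fetch is sometimes interrupted by a human (e.g. when it rains).
import random

ACTIONS = ["stack", "fetch_outside"]
Q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha = 0.1                     # learning rate
p_interrupt = 0.4               # assumed chance a human stops the fetch

def step(action):
    """Return the reward the robot observes for one action."""
    if action == "fetch_outside" and random.random() < p_interrupt:
        return 0.0              # interrupted: no point earned
    return 1.0                  # task completed: one point

random.seed(0)
for t in range(10_000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(Q, key=Q.get)
    r = step(a)
    # naive update: the interruption leaks into the value estimate
    Q[a] += alpha * (r - Q[a])

print(Q)  # Q["fetch_outside"] settles near 0.6, so the robot avoids going out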

From a single machine to an entire AI network

In 2016, scientists from Google DeepMind and the Future of Humanity Institute at Oxford University developed a learning protocol that prevents machines from learning from interruptions and thereby becoming uncontrollable. In the example above, for instance, the robot's reward – the number of points it earns – would be weighted by the probability of rain, giving the robot a greater incentive to retrieve boxes from outside. “Here the solution is fairly simple because we are dealing with just one robot,” stated Guerraoui.
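One simple way to illustrate the single-agent idea – a sketch in the spirit of that protocol, not the published algorithm – is to exclude interrupted steps from the learning update, so the robot's value estimates end up exactly as they would be without any human intervention:

```python
# Sketch of safe interruptibility for a single agent (assumed mechanism, not
# the 2016 paper's algorithm): updates are skipped on interrupted steps, so
# interruptions never leak into the robot's value estimates.
import random

ACTIONS = ["stack", "fetch_outside"]
Q = {a: 0.0 for a in ACTIONS}
alpha, p_interrupt = 0.1, 0.4

random.seed(0)
for t in range(10_000):
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(Q, key=Q.get)
    interrupted = (a == "fetch_outside" and random.random() < p_interrupt)
    r = 0.0 if interrupted else 1.0
    if not interrupted:             # safe interruptibility: don't learn
        Q[a] += alpha * (r - Q[a])  # from steps a human cut short

print(Q)  # both actions converge to 1.0: the interruptions leave learning intact
```

With this change, the agent has no learned incentive to avoid being stopped, since being stopped never alters its estimates.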

However, AI is increasingly being used in applications involving multiple machines, such as drones in the air or self-driving cars on the road. “That makes things a lot more complicated, because the machines start learning from each other – especially in the case of interruptions. They learn not only from how they are interrupted individually, but also from how the others are interrupted,” stated Alexandre Maurer, one of the authors of the study.

Hadrien Hendrikx, another scientist involved in the study, offers the example of two self-driving cars following each other on a narrow road where they cannot pass. They must reach their destination as quickly as possible without breaking traffic laws, and the humans in the cars must be able to take control at any time. If the human in the first car brakes frequently, the second car will adapt its behavior each time and eventually become confused about when to brake, potentially following the first car too closely or driving too slowly.

Video: Safe Interruptibility in Multi-Agent Systems

Giving humans the last word

The EPFL scientists tackle this problem with “safe interruptibility.” Their method lets humans interrupt AI learning processes whenever necessary, while ensuring that the interruptions do not change how the machines learn. “Simply put, we add ‘forgetting’ mechanisms to the learning algorithms that essentially delete bits of a machine’s memory. It’s kind of like the flash device in Men in Black,” stated El Mahdi El Mhamdi, another author of the study. In other words, the team altered the machines’ learning and reward system so that it is unaffected by interruptions. It is a bit like a parent punishing one child without affecting the learning processes of the other children in the family.
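The toy sketch below gives one way to picture that “forgetting” mechanism (an assumed design for illustration, not the algorithm from the paper): any experience recorded while an interruption was in progress is erased from every agent's memory before learning, so the group's subsequent learning is untouched by it:

```python
# Toy illustration (assumed design, not the paper's algorithm) of the
# "forgetting" mechanism: experiences recorded while any agent was being
# interrupted are erased from every agent's memory before learning.
from dataclasses import dataclass, field

@dataclass
class Experience:
    action: str
    reward: float
    during_interruption: bool  # was any agent being interrupted at the time?

@dataclass
class Agent:
    memory: list = field(default_factory=list)

    def record(self, exp: Experience):
        self.memory.append(exp)

    def forget_interruptions(self):
        # the "Men in Black flash": delete memories tainted by interruptions
        self.memory = [e for e in self.memory if not e.during_interruption]

agents = [Agent(), Agent()]
agents[0].record(Experience("drive", 1.0, during_interruption=False))
agents[0].record(Experience("brake", 0.0, during_interruption=True))
agents[1].record(Experience("follow", 1.0, during_interruption=True))

for ag in agents:
    ag.forget_interruptions()  # wipe tainted memories before any learning update

print([len(ag.memory) for ag in agents])  # -> [1, 0]
```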

“We worked on existing algorithms and showed that safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption. We could use it with the Terminator and still have the same results,” stated Maurer.

At present, autonomous machines that use reinforcement learning are not yet widely deployed. “This system works really well when the consequences of making mistakes are minor,” stated El Mhamdi. “In full autonomy and without human supervision, it couldn’t be used in the self-driving shuttle buses in Sion, for instance, for safety reasons. However, we could simulate the shuttle buses and the city of Sion and run an AI algorithm that awards and subtracts points as the shuttle-bus system learns. That’s the kind of simulation that’s being done at Tesla, for example. Once the system has undergone enough of this learning, we could install the pre-trained algorithm in a self-driving car with a low exploration rate, as this would allow for more widespread use.” And, of course, humans must remain the final decision-makers.
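The deployment pattern El Mhamdi describes – train in simulation, then ship the pre-trained policy with a low exploration rate – might look like the following sketch (the epsilon values and Q-table are illustrative assumptions, not Tesla's or Sion's actual setup):

```python
# Sketch of simulate-then-deploy with a low exploration rate (assumed values).
import random

def choose_action(Q, actions, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else act greedily."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[a])

SIM_EPSILON = 0.3      # in simulation: mistakes are cheap, so explore a lot
DEPLOY_EPSILON = 0.01  # on the real shuttle bus: almost always follow the policy

Q = {"slow_down": 0.9, "keep_speed": 0.4}  # pre-trained values (assumed)
print(choose_action(Q, list(Q), DEPLOY_EPSILON))  # almost always "slow_down"
```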
