Nov 5, 2019
“Practice makes perfect” is a maxim that has pushed humans to become extremely dexterous. Now the same strategy is being applied to robots.
Computer scientists from the University of Leeds have applied two artificial intelligence (AI) methods, automated planning and reinforcement learning, to “train” a robot to locate an object in a cluttered space, such as a fridge or a warehouse shelf, and move it.
The objective is robotic autonomy: the machine should be able to assess the specific circumstances of a task and work out a solution, akin to a robot transferring its skills and knowledge to a new problem.
The Leeds scientists recently presented their findings at the International Conference on Intelligent Robots and Systems (IROS) in Macau, China.
The big challenge is that in a confined space, a robotic arm may not be able to grasp an object from above. Instead, it has to plan a sequence of moves to reach the target object, possibly by shifting other items out of the way.
The computing power required to plan such a task is considerable, so the robot typically pauses for several minutes. And when it does execute the move, it frequently fails.
Building on the idea that practice makes perfect, the computer scientists from Leeds are combining two techniques from AI.
The first is automated planning. The robot “sees” the problem through a vision system, effectively as an image. Software in the robot’s operating system then simulates the possible sequences of moves it could make to reach the target object.
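The core of this idea can be sketched as a small forward-search planner: simulate each candidate move from the current state, and search for the shortest sequence that reaches the goal. This is a minimal illustration, not the Leeds system; the shelf model, the item names, and the two actions below are invented for the example.

```python
from collections import deque

def bfs_plan(start, goal_test, actions):
    """Tiny forward-search planner: breadth-first search over states.
    `actions` is a list of (name, step) pairs, where step(state) returns
    the successor state, or None if the action is illegal there.
    Returns the shortest action sequence that reaches a goal state."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if goal_test(state):
            return plan
        for name, step in actions:
            nxt = step(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [name]))
    return None

# Toy shelf lane: items ordered front-to-back, target at the back.
# "clear-front" shifts the frontmost clutter item aside;
# "grasp" picks up the target once nothing is in front of it.
def clear_front(state):
    lane, holding = state
    return (lane[1:], holding) if lane and lane[0] != "apple" else None

def grasp(state):
    lane, holding = state
    return (lane[1:], True) if lane and lane[0] == "apple" else None

plan = bfs_plan(
    start=(("can", "box", "apple"), False),
    goal_test=lambda state: state[1],        # goal: holding the target
    actions=[("clear-front", clear_front), ("grasp", grasp)],
)
print(plan)  # ['clear-front', 'clear-front', 'grasp']
```

The planner finds that it must relocate the two items blocking the apple before grasping it, mirroring the “shift other items out of the way” behaviour described above.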
However, the simulations the robot “rehearses” fail to capture the intricacy of the real world, and when the plan is executed, the robot often fails the task, for instance by knocking objects off a shelf.
The Leeds team therefore integrated planning with another AI technique, reinforcement learning.
Reinforcement learning puts the computer through a series of trial-and-error attempts, about 10,000 in total, at reaching and moving objects. These attempts allow the robot to “learn” which of its planned movements are more likely to end in success.
The computer learns on its own, starting by randomly selecting a planned move that might work. But as it learns from trial and error, it becomes increasingly skilled at choosing the planned moves with a better chance of success.
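The learning loop described above can be sketched as an epsilon-greedy learner choosing among a fixed set of candidate planned moves. Everything here is an illustrative assumption rather than the team’s actual method: the hidden success probabilities stand in for real-world outcomes, and the function name and parameters are invented for the sketch.

```python
import random

def learn_move_preferences(success_probs, attempts=10_000, eps=0.1, seed=0):
    """Epsilon-greedy trial and error over candidate planned moves.

    Each index stands for one move sequence produced by a planner;
    `success_probs` are the hidden real-world success rates, simulated
    here with coin flips. The learner mostly tries the move it currently
    rates best, but explores a random one with probability `eps`,
    updating a running estimate of each move's success rate."""
    rng = random.Random(seed)
    n = len(success_probs)
    estimates = [0.0] * n     # estimated success rate per planned move
    counts = [0] * n          # how often each move has been tried
    for _ in range(attempts):
        if rng.random() < eps:
            move = rng.randrange(n)                          # explore
        else:
            move = max(range(n), key=estimates.__getitem__)  # exploit
        success = rng.random() < success_probs[move]  # simulated outcome
        counts[move] += 1
        # incremental mean: nudge the estimate toward the new outcome
        estimates[move] += (success - estimates[move]) / counts[move]
    return estimates, counts
```

After roughly 10,000 attempts the estimates single out the move with the highest real success rate, which is the behaviour the article describes: early choices are effectively random, later ones concentrate on moves that have worked.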
Artificial intelligence is good at enabling robots to reason—for example, we have seen robots involved in games of chess with grandmasters. But robots aren’t very good at what humans do very well: being highly mobile and dexterous. Those physical skills have been hardwired into the human brain, the result of evolution and the way we practise and practise and practise. And that is an idea that we are applying to the next generation of robots.
Dr Matteo Leonetti, School of Computing, University of Leeds
According to Wissam Bejjani, the PhD researcher who authored the paper, the robot develops an ability to generalise, applying what it has planned to a unique set of circumstances.
Our work is significant because it combines planning with reinforcement learning. A lot of research to try and develop this technology focuses on just one of those approaches. Our approach has been validated by results we have seen in the University’s robotics lab. With one problem, where the robot had to move a large apple, it first went to the left side of the apple to move away the clutter, before manipulating the apple.
Wissam Bejjani, Study Author and PhD Researcher, University of Leeds
Bejjani added, “It did this without the clutter falling outside the boundary of the shelf.”
Dr Mehmet Dogar, Associate Professor in the School of Computing, was also part of the research. He said the method had cut the robot’s “thinking” time by a factor of 10: decisions that previously took 50 seconds now take just 5 seconds.
The study received financial support from the Engineering and Physical Sciences Research Council as part of a project examining “human-like physics” in robotics.