New Robot Helper Designed to Understand and Respond to Human Movement

In the last ten years, robots have increasingly worked alongside humans, for instance in manufacturing, in surgery, and in assistive devices for physically impaired individuals. However, robots cannot yet react to individual users in a personalized way, which limits how useful they can be to the people they work with.

(Image credit: © Imperial College London)

Now, Professor Etienne Burdet from Imperial College London and colleagues have created the first interactive robot controller that learns the human user's behaviour and uses this to predict their subsequent movements.

Reactive system: About the study

The scientists created a reactive robotic programming system that allows a robot to continuously learn the human user's movements and adapt its own movements accordingly.

When observing how humans physically interact with each other, we found that they succeed very well through their unique ability to understand each other’s control. We applied a similar principle to design human-robot interaction.

Etienne Burdet, Professor, Department of Bioengineering, Imperial College London.

The research, conducted in partnership with the University of Sussex and Nanyang Technological University in Singapore, is reported in Nature Machine Intelligence.

Generally, humans can control robots only in quite simple ways: either in "master-slave" mode, where the robot amplifies or reproduces the human's movement, as with an exoskeleton, or in a mode where the robot takes no account of the human user at all, as in some rehabilitation devices. Humans are variable and continuously change their movements, which makes it hard for robots to predict human behavior and react in supportive ways. This can lead to errors in completing the task.

The research team set out to explore how a contact robot should be controlled to respond stably and appropriately to a user whose behavior is unfamiliar, during movements in activities such as physical rehabilitation, sports training, or shared driving.

Game theory

The authors built a robot controller based on game theory, which analyzes how multiple players, either competing or collaborating, complete a task. Each player has a strategy (a rule for choosing their next action based on the present state of the game), and every player tries to optimize their own performance while assuming the other players will also play optimally. To apply game theory to physical interaction, the scientists had to overcome the problem that the robot cannot infer the human's intentions by reasoning alone.
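To make this concrete, a standard way to cast such an interaction is as a two-player linear-quadratic differential game (a generic textbook formulation given here for illustration, not necessarily the exact equations used in the study):

\[
\dot{x} = A x + B_r u_r + B_h u_h,
\qquad
J_i = \int_0^{\infty} \left( x^\top Q_i\, x + u_i^\top R_i\, u_i \right) dt,
\quad i \in \{r, h\},
\]

where \(x\) is the task error, \(u_r\) and \(u_h\) are the robot's and human's control inputs, and \(Q_i\), \(R_i\) encode how much each player penalizes error versus effort. At a Nash equilibrium, each player's feedback law is optimal given the other's, so neither can improve their cost by unilaterally changing strategy. The robot's difficulty is that the human's cost weights are unknown and must be estimated online.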

They used game theory to determine how the robot should react to the effects of interacting with a human, using the difference between its expected and actual motions to estimate the human's strategy, that is, how the human turns task errors into new actions. For instance, if the human's strategy does not let them finish the task, the robot can increase its effort to assist them. Once the robot can estimate the human's strategy, it can adapt its own strategy in response.
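The mechanism described above can be sketched in a few lines of code. This is a minimal illustrative simulation under our own assumptions, not the paper's published algorithm: a robot and a human jointly steer a one-dimensional task error toward zero, and the robot infers the human's (hidden) control gain from the gap between the motion it expected from its own action alone and the motion that actually occurred, then scales back its own effort accordingly.

```python
def simulate(L_H_true=2.0, L_total=8.0, steps=200, dt=0.01, eta=0.5):
    """Toy human-robot interaction: the robot estimates the human's
    feedback gain from prediction errors and supplies the remaining effort."""
    x = 1.0          # initial task error
    L_H_hat = 0.0    # robot's running estimate of the human's gain
    for _ in range(steps):
        L_R = max(L_total - L_H_hat, 0.0)   # robot fills in the remaining effort
        u_R = -L_R * x                      # robot's action
        u_H = -L_H_true * x                 # human's action (hidden from the robot)
        x_pred = x + dt * u_R               # motion expected from robot's action alone
        x_next = x + dt * (u_R + u_H)       # motion actually observed
        u_H_obs = (x_next - x_pred) / dt    # human action implied by the prediction error
        if abs(x) > 1e-6:
            implied_gain = -u_H_obs / x     # gain consistent with that action
            L_H_hat += eta * (implied_gain - L_H_hat)
        x = x_next
    return x, L_H_hat

final_x, L_H_hat = simulate()
print(f"final error: {final_x:.2e}, estimated human gain: {L_H_hat:.2f}")
```

In this sketch, as the estimate of the human's gain converges, the robot reduces its own contribution, so the combined effort stays near the target level; if the human weakened, the estimate would drop and the robot would automatically compensate.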

The team tested their framework in simulations with human subjects, demonstrating that the robot can adapt both when the human's strategy changes gradually, as if the human were recovering strength, and when the human's strategy is changeable and unpredictable, such as after injury.

Working in harmony

Lead author Dr Yanan Li from the University of Sussex, who conducted the research while at Imperial’s Department of Bioengineering, said: “It is still very early days in the development of robots and at present, those that are used in a working capacity are not intuitive enough to work closely and safely with human colleagues or clients without human supervision. By enabling the robot to identify the human user’s behaviour and exploiting game theory to let the robot optimally react to any human user, we have developed a system where robots can work in much better harmony with humans.”

Going forward, the team will apply the interactive controller to robot-assisted neurorehabilitation with collaborators at Nanyang Technological University in Singapore, and to shared driving in semi-autonomous vehicles.

The European Commission funded the research.
