
Researchers Develop Novel Learning Method for Robots

Robots can now learn directly from videos of human-robot interactions and apply this information to new tasks thanks to a new learning method created by SCS researchers.

SCS researchers have developed a learning method for robots that allows them to learn directly from human-interaction videos and apply that knowledge to new tasks. Image Credit: Carnegie Mellon University.

Shikhar Bahl opened the refrigerator door while the robot watched. It recorded his movements, the swing of the door, the location of the refrigerator and other details, then analyzed the information and prepared to imitate what Bahl had done.

It failed at first, sometimes missing the handle entirely, grabbing it in the wrong place or pulling it in the wrong direction. After some practice, however, the robot succeeded in opening the door.

Imitation is a great way to learn. Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.

Shikhar Bahl, Ph.D. Student, Robotics Institute, School of Computer Science, Carnegie Mellon University

Bahl worked with Deepak Pathak and Abhinav Gupta, both faculty members in the Robotics Institute, to develop a new learning technique for robots known as WHIRL, short for In-the-Wild Human Imitating Robot Learning. WHIRL is an effective algorithm for one-shot visual imitation.

WHIRL can learn directly from human-interaction videos and generalize that information to new tasks, making robots well suited to learning household chores. People perform all kinds of tasks in their homes every day. With WHIRL, a robot could observe those tasks and gather the video data it needs to figure out how to do the job itself.
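At a high level, the article describes a watch-once, then-practice loop: the robot extracts the human's motion from a single demonstration video, retargets it to its own body and refines it through trial and error until the outcome matches. The sketch below is a minimal, hypothetical illustration of that loop; every helper name is a placeholder and none of it comes from the WHIRL codebase.

```python
# Hypothetical sketch of a one-shot visual-imitation loop as described in the
# article: watch one human demonstration, then practice until the task succeeds.
# All helper names (extract_human_trajectory, retarget_to_robot, task_succeeded,
# refine_policy) are illustrative placeholders, not part of WHIRL itself.

def imitate_from_video(demo_video, robot, max_attempts=50):
    # Estimate the human's motion from the single demonstration video.
    prior_trajectory = extract_human_trajectory(demo_video)

    # Retarget the human motion onto the robot's arm as an initial guess.
    policy = retarget_to_robot(prior_trajectory, robot)

    for attempt in range(max_attempts):
        outcome = robot.execute(policy)
        if task_succeeded(outcome, demo_video):
            return policy  # the robot has matched the demonstrated outcome
        # Use the failed attempt to perturb and improve the policy around the prior.
        policy = refine_policy(policy, outcome, prior_trajectory)

    return policy  # best effort after exhausting the attempt budget
```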

The research group added a camera and their software to an off-the-shelf robot, and it learned how to do over 20 tasks — from opening and closing appliances, cabinet doors and drawers to putting a lid on a pot, pushing in a chair and even taking a garbage bag out of the bin.

In each case, the robot watched a human complete the task once and then practiced until it could carry out the task on its own. The research group presented their study this month at the Robotics: Science and Systems (RSS) conference in New York.

This work presents a way to bring robots into the home. Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people's homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.

Deepak Pathak, Assistant Professor, Robotics Institute, Carnegie Mellon University

Current methods for teaching a robot a task typically rely on imitation learning or reinforcement learning. In imitation learning, humans manually operate the robot to show it how to complete a task, and this must be repeated many times for a single task before the robot learns.

In reinforcement learning, the robot is usually trained on millions of examples in simulation and then asked to transfer that training to the real world.
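For contrast with the single-video setting described above, the snippet below sketches conventional imitation learning in its simplest form (behavior cloning): a policy is fit to many state-action pairs gathered by manually operating the robot through the same task again and again. The data here is synthetic and the linear policy is purely illustrative.

```python
import numpy as np

# Illustrative behavior cloning: fit a policy to many (state, action) pairs
# collected by teleoperating the robot through the SAME task repeatedly.
# This is the "many demonstrations per task" regime the article contrasts
# with WHIRL's single human video; all data below is synthetic.

rng = np.random.default_rng(0)
num_demos, steps_per_demo, state_dim, action_dim = 100, 50, 12, 7

states = rng.normal(size=(num_demos * steps_per_demo, state_dim))
true_policy = rng.normal(size=(state_dim, action_dim))
actions = states @ true_policy + 0.01 * rng.normal(size=(num_demos * steps_per_demo, action_dim))

# Least-squares fit: the cloned policy maps observed states to demonstrated actions.
cloned_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

print("policy recovery error:", np.linalg.norm(cloned_policy - true_policy))
```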

Both approaches work well for teaching a robot a single task in a structured environment, but they are difficult to scale and deploy. WHIRL, by contrast, can learn from any video of a human performing a task.

It scales easily, is not confined to one particular task and can operate in realistic home environments. The group is even working on a version of WHIRL trained on videos of human interaction from YouTube and Flickr.

Advances in computer vision made the work possible. Using models trained on internet data, computers can now understand and model movement in 3D. The team used these models to understand human movement, which simplified WHIRL's training.
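The article does not name the specific vision models the team used, but off-the-shelf pose estimators give a sense of this step. The hedged sketch below uses MediaPipe Pose to pull a rough 3D wrist trajectory out of a demonstration video; the file path and the choice of landmark are assumptions made only for illustration.

```python
import cv2
import mediapipe as mp

# Hedged illustration: extract human pose landmarks from a demonstration video
# with an off-the-shelf model (MediaPipe Pose). The article does not say which
# models WHIRL uses; this only shows the kind of "understand human movement
# from video" step it describes.

pose = mp.solutions.pose.Pose(static_image_mode=False)
capture = cv2.VideoCapture("human_demo.mp4")  # hypothetical demonstration video

trajectory = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_world_landmarks:
        # Keep the right-wrist position per frame as a crude motion trajectory.
        wrist = result.pose_world_landmarks.landmark[mp.solutions.pose.PoseLandmark.RIGHT_WRIST]
        trajectory.append((wrist.x, wrist.y, wrist.z))

capture.release()
print(f"recovered {len(trajectory)} wrist positions from the demonstration")
```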

With WHIRL, a robot can complete tasks in their natural settings. The appliances, doors, lids, drawers, chairs and garbage bags were not modified or manipulated to suit the robot.

The robot's first several attempts at a task usually failed, but once it had a few successes, it quickly latched on to how to complete the task and eventually mastered it. The robot may not complete the task in exactly the same way a human would, but that is not the aim.

Humans and robots have different parts, and they move in different ways. The only thing that matters is that the end result is the same. The door is opened. The switch is off. The faucet is turned on.

To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own.

Deepak Pathak, Assistant Professor, Robotics Institute, Carnegie Mellon University

WHIRL: Human-to-Robot Imitation in the Wild. Published at RSS 2022. Video Credit: Carnegie Mellon University.

Source: https://www.cmu.edu/
