
New Insights Into Motion Planning Techniques May Help Robots Perform Tasks

Robots, like humans, cannot see through barriers, and they sometimes need a helping hand to get where they are going.

The task set for this Fetch robot by Rice University computer scientists is made easier by their BLIND software, which allows for human intervention when the robot’s path is blocked by an obstacle. Keeping a human in the loop augments robot perception and prevents the execution of unsafe motion, according to the researchers. Image Credit: Kavraki Lab.

Rice University engineers have devised a mechanism for people to assist robots in “seeing” their surroundings and carrying out tasks.

The Bayesian Learning IN the Dark (BLIND) technique is a novel solution to the long-standing challenge of motion planning for robots that operate in environments where not everything is clearly visible at all times.

Computer scientists Lydia Kavraki and Vaibhav Unhelkar of Rice’s George R. Brown School of Engineering, along with co-lead authors Carlos Quintero-Peña and Constantinos Chamzas, presented the peer-reviewed study in late May at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation.

According to the study, the algorithm built by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion.”

To do so, they integrated Bayesian inverse reinforcement learning (in which a system infers what a person wants from the feedback it receives, updating its estimate as new information arrives) with well-established motion planning approaches to assist robots that have “high degrees of freedom,” that is, many moving parts.
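In very rough terms, the Bayesian side of this can be pictured as a belief over hypotheses about the unseen part of the workspace, updated with Bayes’ rule each time the human labels a piece of a trajectory. The sketch below is a toy illustration under that reading, not the authors’ implementation; the obstacle model, the noise level eps, and all other specifics are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the robot does not know where a hidden obstacle is,
# so it keeps a uniform prior over candidate obstacle locations.
candidates = rng.uniform(0.0, 1.0, size=(100, 2))
belief = np.full(len(candidates), 1.0 / len(candidates))

def clears(point, obstacle, radius=0.15):
    """Under one hypothesis, a trajectory point is safe if it keeps more
    than `radius` away from the hypothesized obstacle."""
    return np.linalg.norm(point - obstacle) > radius

def bayes_update(belief, point, approved, eps=0.05):
    """Noisy-label likelihood: the human approves a truly safe segment
    with probability 1 - eps, and an unsafe one with probability eps."""
    safe = np.array([clears(point, c) for c in candidates])
    likelihood = np.where(safe == approved, 1.0 - eps, eps)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# One critique: the human rejects the segment ending at (0.5, 0.5), so
# belief mass shifts toward hypotheses placing the obstacle near there.
belief = bayes_update(belief, np.array([0.5, 0.5]), approved=False)
print(candidates[np.argmax(belief)])  # most probable obstacle location
```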

The Rice lab tested BLIND on a Fetch robot, whose articulated arm has seven joints, tasking it with grasping a small cylinder from one table and moving it to another; to do so, the robot had to maneuver past a barrier.

If you have more joints, instructions to the robot are complicated. If you’re directing a human, you can just say, ‘lift up your hand.’

Carlos Quintero-Peña, Study Co-Lead Author and PhD Student, Department of Computer Science, Rice University

However, when obstructions obscure the machine’s “view” of its destination, the robot’s programmers must be precise about the motion of each joint at each point in its trajectory.
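To see what that precision entails, a joint-space trajectory for a seven-joint arm is simply a sequence of seven-dimensional waypoints: one joint angle per joint at every step. A minimal illustration in Python (the start and goal configurations and the waypoint count are made up for the example, not taken from the paper):

```python
import numpy as np

N_JOINTS = 7      # the Fetch arm has seven joints
N_WAYPOINTS = 20  # illustrative trajectory resolution

# A joint-space trajectory is a (waypoints x joints) array of joint angles
# in radians. Here we merely interpolate linearly between a start and goal
# configuration; a real planner must instead pick intermediate
# configurations that keep every link clear of obstacles.
start = np.zeros(N_JOINTS)
goal = np.array([0.5, -0.3, 0.8, -1.2, 0.4, 0.9, -0.1])
trajectory = np.linspace(start, goal, N_WAYPOINTS)

print(trajectory.shape)  # (20, 7): every joint specified at every step
```

Every one of those 140 numbers has to be right, which is why obscured obstacles make planning for high-degree-of-freedom robots so difficult.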

BLIND adds a human mid-process to modify the choreographic possibilities — or best guesses — offered by the robot’s algorithm, rather than setting a route upfront. “BLIND allows us to take information in the human’s head and compute our trajectories in this high-degree-of-freedom space. We use a specific way of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory,” Quintero-Peña stated.

These labels appear as a series of interconnected green dots that symbolize different pathways. The human accepts or rejects each movement as BLIND moves from dot to dot, refining the path and avoiding obstructions as effectively as possible.
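Taken together, the interaction resembles a propose-critique-replan loop. The sketch below is hypothetical: a 2D toy problem stands in for the seven-joint arm, and a scripted human_critique function stands in for real human feedback. Still, the flow matches the description above: the planner proposes the next motion, the human accepts or rejects it, and rejections push later proposals around the obstacle that only the human can see:

```python
import numpy as np

rng = np.random.default_rng(1)

def propose_step(current, goal, n_rejected):
    """Step toward the goal; after each rejection, try a larger sideways
    detour (a crude stand-in for the planner proposing alternatives)."""
    direction = (goal - current) / np.linalg.norm(goal - current)
    perpendicular = np.array([-direction[1], direction[0]])
    detour = 0.15 * n_rejected * rng.choice([-1.0, 1.0])
    return current + 0.2 * direction + detour * perpendicular

def human_critique(point, obstacle, radius=0.2):
    """Stand-in for the human: reject any motion that ends too close to
    the obstacle that only the human can see."""
    return np.linalg.norm(point - obstacle) > radius

current, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
hidden_obstacle = np.array([0.5, 0.5])  # visible to the human, not the robot
path, n_rejected = [current], 0

for _ in range(200):  # safety cap on planning iterations
    if np.linalg.norm(current - goal) < 0.1:
        break
    candidate = propose_step(current, goal, n_rejected)
    if human_critique(candidate, hidden_obstacle):  # "I like this"
        current, n_rejected = candidate, 0
        path.append(current)
    else:                                           # "I don't like that"
        n_rejected += 1

print(len(path), "accepted waypoints")
```

Because each critique is a single accept-or-reject bit, the human never has to supply joint angles or coordinates; they only veto motions, which is what keeps the interface simple.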

It’s an easy interface for people to use, because we can say, ‘I like this’ or ‘I don’t like that,’ and the robot uses this information to plan.

Constantinos Chamzas, Study Co-Lead Author and PhD Student, Department of Computer Science, Rice University

Once the robot has been given a series of approved motions, it can complete its task, Chamzas said.

One of the most important things here is that human preferences are hard to describe with a mathematical formula. Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.

Carlos Quintero-Peña, Study Co-Lead Author and PhD Student, Department of Computer Science, Rice University

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” noted Kavraki, a robotics pioneer who has advanced programming for NASA’s humanoid Robonaut aboard the International Space Station.

Kavraki stated, “It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences.”

The paper’s co-authors are Zhanyi Sun, a Rice undergraduate, and Unhelkar, an assistant professor of computer science. Kavraki is the head of the Ken Kennedy Institute and the Noah Harding Professor of Computer Science. Kavraki is also a professor of bioengineering, electrical and computer engineering, and mechanical engineering.

The research was funded by the National Science Foundation (2008720, 1718487) and an NSF Graduate Research Fellowship Program award (1842494).

Video: Human Guided Motion Planning in Partially Observable Environments. Video Credit: Kavraki Lab/Rice University.

Source: https://www.rice.edu/
