
New, Simpler and More Efficient Interface to Remotely Control Robots


The traditional interface for remotely operating robots works perfectly well for roboticists. They use a computer screen and mouse to control each of the six degrees of freedom individually, turning three virtual rings and adjusting arrows to move the robot into position to grab items or perform a specific task.

A comparison of the current ring-and-arrow technique and Georgia Tech's new point-and-click interface.

But for people who are not experts, the ring-and-arrow system is cumbersome and error-prone. It is not ideal, for instance, for older adults attempting to control assistive robots at home.

Georgia Institute of Technology researchers have developed a new interface that is far simpler, more efficient and requires little training time. The user simply points and clicks on an item, then chooses a grasp. The robot handles the rest of the task.

“Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we’ve shortened the process to just two clicks,” said Sonia Chernova, the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing who advised the research effort.

Her team tested college students on both systems and found that the point-and-click technique resulted in considerably fewer errors, allowing participants to complete tasks more quickly and reliably than with the traditional technique.

Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them. Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That’s much easier.

David Kent, Robotics Ph.D. Student, Georgia Tech

The traditional ring-and-arrow system is a split-screen technique. The first screen displays the robot and the scene; the second is an interactive 3D view in which the user adjusts a virtual gripper and tells the robot exactly where to go and what to grab. This method makes no use of scene information, giving operators the highest level of flexibility and control. But that freedom, combined with the size of the workspace, can become a liability and increase the number of errors.
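
To make concrete what the operator is manipulating, the sketch below models a 6-DOF gripper target edited one control at a time: three translation arrows and three rotation rings. This is a minimal, hypothetical illustration in Python; the class and method names are invented, and this is not the actual interface code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

class GripperPose:
    """Hypothetical 6-DOF target pose edited via rings and arrows."""

    def __init__(self):
        self.position = np.zeros(3)             # x, y, z in meters
        self.orientation = Rotation.identity()  # roll/pitch/yaw

    def drag_arrow(self, axis: int, delta_m: float):
        """Translate along one axis (one of the three arrows)."""
        self.position[axis] += delta_m

    def turn_ring(self, axis: str, delta_deg: float):
        """Rotate about one axis (one of the three rings)."""
        self.orientation = (
            Rotation.from_euler(axis, delta_deg, degrees=True)
            * self.orientation
        )

# Positioning a grasp this way takes many small, separate adjustments:
pose = GripperPose()
pose.drag_arrow(0, 0.05)   # nudge 5 cm along x
pose.turn_ring("z", 15.0)  # twist the yaw ring 15 degrees
print(pose.position, pose.orientation.as_euler("xyz", degrees=True))
```

Every one of those incremental adjustments is an opportunity for error, which is the burden the point-and-click interface removes.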

The point-and-click format dispenses with the 3D mapping view. It offers only the camera view, resulting in a simpler interface for the user. After the user clicks on a region of an item, the robot's perception algorithm analyzes the object's 3D surface geometry to determine where the gripper should be placed, much as people position their fingers before grasping something. The computer then recommends a few grasps; the user picks one, and the robot carries it out.
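
One plausible way such a perception step could work is sketched below: take the depth-camera points near the clicked location, estimate the local surface normal with PCA, and offer a few gripper orientations approaching along that normal. The inputs, neighborhood radius and candidate count here are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def suggest_grasps(cloud: np.ndarray, click_xyz: np.ndarray,
                   radius: float = 0.03, n_candidates: int = 3):
    """Propose gripper approach poses near a clicked 3D point.

    cloud: (N, 3) array of scene points from a depth camera.
    click_xyz: 3D point back-projected from the user's click.
    Returns a list of (approach_vector, wrist_roll_radians) pairs.
    """
    # 1. Gather the local surface patch around the click.
    dists = np.linalg.norm(cloud - click_xyz, axis=1)
    patch = cloud[dists < radius]
    if len(patch) < 10:
        return []  # not enough geometry to reason about

    # 2. PCA on the patch: the direction with the smallest
    #    variance approximates the surface normal.
    centered = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:  # heuristically orient toward the camera
        normal = -normal

    # 3. Approach anti-parallel to the normal, offering a few
    #    wrist rolls for the user to choose between.
    rolls = np.linspace(0.0, np.pi, n_candidates, endpoint=False)
    return [(-normal, roll) for roll in rolls]
```

In a real system, each candidate would also be checked for collisions and gripper-width feasibility before being shown to the user.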

The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can’t see, such as the back of a bottle. Our brains do this on their own — we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot’s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.

Sonia Chernova, Assistant Professor in the School of Interactive Computing, Georgia Tech
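
The occlusion reasoning Chernova describes is closely related to what the robotics literature calls shape completion. As a loose illustration, not the team's actual method, the sketch below fills in the unseen back of a rotationally symmetric object such as a bottle by mirroring the visible points through an estimated vertical symmetry axis:

```python
import numpy as np

def complete_by_symmetry(visible: np.ndarray) -> np.ndarray:
    """Guess an object's hidden back by assuming rotational symmetry
    about a vertical axis (as with a bottle). Illustrative only.

    visible: (N, 3) points the camera can actually see.
    Returns the visible points plus mirrored, "hallucinated" points.
    """
    # Estimate the symmetry axis as the vertical line through the
    # centroid of the observed points.
    cx, cy = visible[:, 0].mean(), visible[:, 1].mean()

    # Reflect each point through that axis (a 180-degree rotation):
    # the hidden back of a round object mirrors the visible front.
    mirrored = visible.copy()
    mirrored[:, 0] = 2 * cx - visible[:, 0]
    mirrored[:, 1] = 2 * cy - visible[:, 1]

    return np.vstack([visible, mirrored])
```

The completed geometry gives the grasp planner a full surface to reason about, even though the camera only ever saw one side of the object.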

By analyzing the scene and proposing where to position the gripper, the algorithm shifts the load from the user to the software, which decreases errors. In the study, college students completed a task roughly two minutes faster with the new technique than with the traditional interface. The point-and-click technique also produced about one mistake per task, compared with nearly four for the ring-and-arrow method.

Beyond assistive robots in homes, the researchers believe the interface could also be applied to search-and-rescue operations and space exploration. It has been released as open-source software and was demonstrated March 6-9 in Vienna, Austria, at the 2017 Conference on Human-Robot Interaction (HRI 2017).
