New Software Allows Robots to Be Manipulated in Virtual Reality

Although autonomous robots are getting better at performing tasks on their own, there will still be many situations where humans have to step in and take charge. New software developed by Brown University computer scientists lets users control robots remotely using virtual reality, immersing them in a robot's surroundings even when they are physically miles away.

Virtual control: Brown University undergraduate Eric Rosen operates a Baxter robot using a virtual reality interface developed in Brown's Humans to Robots lab. (Credit: Nick Dentamaro)

The software links a robot's arms and grippers, as well as its onboard sensors and cameras, to off-the-shelf virtual reality hardware over the Internet. Using handheld controllers, users can position the robot's arms to perform complex manipulation tasks simply by moving their own.
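In a ROS-based system (the authors released theirs under the name ROS Reality), the heart of such a bridge can be quite small. The sketch below is a minimal illustration, not the authors' code: it forwards a hand-controller pose as a target pose for the robot's gripper, and the topic names and coordinate frame are assumptions.

import rospy
from geometry_msgs.msg import PoseStamped

def controller_callback(msg, pub):
    # Re-stamp the controller pose in the robot's base frame and forward
    # it as the desired gripper pose; a downstream inverse-kinematics
    # node would move the arm to track it.
    target = PoseStamped()
    target.header.stamp = rospy.Time.now()
    target.header.frame_id = "base"  # assumed robot base frame
    target.pose = msg.pose           # one-to-one mapping of hand to gripper
    pub.publish(target)

if __name__ == "__main__":
    rospy.init_node("vr_teleop_bridge")
    pub = rospy.Publisher("/robot/right_arm/target_pose", PoseStamped,
                          queue_size=1)
    rospy.Subscriber("/vive/controller_right/pose", PoseStamped,
                     controller_callback, callback_args=pub)
    rospy.spin()

Because the mapping from hand to gripper is one-to-one, moving a hand moves the arm directly, which is the intuitiveness the researchers describe below.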

Users can step inside the robot's metal skin for a first-person view of the environment, or walk around the robot to inspect the scene in the third person, whichever makes the task at hand easier. The data exchanged between the robot and the virtual reality unit is compact enough to travel over the Internet with negligible lag, making it feasible for users to control robots from great distances.

We think this could be useful in any situation where we need some deft manipulation to be done, but where people shouldn’t be. Three examples we were thinking of specifically were in defusing bombs, working inside a damaged nuclear facility or operating the robotic arm on the International Space Station.

David Whitney, Graduate Student at Brown University who co-led the development of the system

Whitney co-led the research with Eric Rosen, an undergraduate student at Brown. Both are part of Brown's Humans to Robots lab, which is led by Stefanie Tellex, an assistant professor of computer science. A paper describing the system and assessing its usability was presented recently at the International Symposium on Robotics Research in Chile.

Even highly advanced robots are often controlled remotely with fairly unsophisticated tools, frequently a keyboard, or a 2D monitor paired with a video game controller. That works fine, Whitney and Rosen say, for tasks like flying a drone or driving a wheeled robot around, but it can fall short for more complex tasks.

For things like operating a robotic arm with lots of degrees of freedom, keyboards and game controllers just aren’t very intuitive.

David Whitney

In addition, mapping a 3D environment onto a 2D screen can limit one's perception of the space the robot occupies.

Whitney and Rosen believed virtual reality might offer a more intuitive and immersive option. Their software connects a Baxter research robot to an HTC Vive, a virtual reality system that comes with hand controllers. The software uses the robot's sensors to build a point-cloud model of the robot and its environment, which is transmitted to a remote computer connected to the Vive. Users can view that space in the headset and virtually walk around inside it. At the same time, users see live high-definition video from the robot's wrist cameras for detailed views of the manipulation tasks to be completed.
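The article does not spell out how the point cloud is kept small enough to transmit. One common technique, shown in a hedged sketch below, is voxel-grid downsampling: keep one point per small cube of space before sending the cloud over the network. The voxel size and array shapes here are assumptions, not details from the paper.

import numpy as np

def voxel_downsample(points, voxel_size=0.02):
    """Keep one point per (voxel_size)-metre cube; points is an N x 3 array."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows keeps the first point seen in each occupied voxel
    _, first_idx = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(first_idx)]

cloud = np.random.rand(100_000, 3)  # stand-in for one depth-camera frame
small = voxel_downsample(cloud)
print(f"{len(cloud)} points reduced to {len(small)}")

Shipping only the reduced cloud each frame, alongside a conventionally compressed video stream from the wrist cameras, is one way to keep the link responsive.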

For their research, the team showed that they could create an immersive experience for users while keeping the data load small enough to be transmitted over the Internet without a distracting lag. A user in Providence, Rhode Island, for instance, could perform a manipulation task, stacking plastic cups one inside another, using a robot 41 miles away in Cambridge, Massachusetts.
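A rough back-of-envelope calculation shows why the data load matters; the figures below are illustrative assumptions, not measurements from the paper.

POINTS_PER_FRAME = 300_000  # a raw depth-camera cloud (assumed)
BYTES_PER_POINT = 16        # x, y, z and color as 4-byte fields (assumed)
FPS = 30                    # assumed frame rate

raw_mbps = POINTS_PER_FRAME * BYTES_PER_POINT * FPS * 8 / 1e6
print(f"raw stream: {raw_mbps:.0f} Mbit/s")  # about 1,150 Mbit/s, far too much

# Reducing the cloud to roughly 1% of its points brings the stream under
# typical broadband rates, which is what makes remote operation practical.
print(f"downsampled stream: {raw_mbps * 0.01:.1f} Mbit/s")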

In further studies, 18 novice users completed the cup-stacking task 66% faster in virtual reality than with a traditional keyboard-and-monitor interface. Users also reported preferring the virtual interface and found the manipulation tasks less demanding than with a keyboard and monitor.

Rosen believes the speed improvement came from the intuitiveness of the virtual reality interface.

In VR, people can just move the robot like they move their bodies, and so they can do it without thinking about it. That lets people focus on the problem or task at hand without the increased cognitive load of trying to figure out how to move the robot.

Eric Rosen, Undergraduate Student, Brown University

The researchers plan to keep developing the system. The first iteration focused on a fairly simple manipulation task with a robot that remained stationary in its environment. They would like to attempt more complex tasks and later combine manipulation with navigation. They also want to explore mixed autonomy, in which the robot performs some tasks on its own and the user takes over for others.

The researchers have made the system freely available on the web. They hope other robotics researchers will try it out and take it in new directions of their own.

Besides Whitney, Rosen, and Tellex, the paper's other authors were Elizabeth Phillips, a postdoctoral researcher with Brown's Humanity Centered Robotics Initiative, and George Konidaris, an assistant professor of computer science. The research was funded in part by the Defense Advanced Research Projects Agency (DARPA) (W911NF-15-1-0503, YFA: D15AP00104, YFA: GR5245014 and D15AP00102) and NASA (GR5227035).

Video: Comparing Robot Grasping Teleoperation across Desktop and Virtual Reality with ROS Reality

