
Novel Virtual Reality System to Facilitate Tele-Operating of Robots

Telecommuting has yet to be adopted in certain industries. For instance, many manufacturing jobs require an operator to be physically present to handle the machinery.

CSAIL's new VR system, consisting of an Oculus Rift headset and hand controllers, enables users to tele-operate a robot. (Photo: Jason Dorfman/MIT CSAIL)

But what if such jobs could be handled remotely? Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) introduced a virtual reality (VR) system that allows users to tele-operate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with numerous sensor displays, making it feel like they are inside the robot’s head. By using hand controllers, users can match their movements to the robot’s movements to finish a variety of tasks.

A system like this could eventually help humans supervise robots from a distance. By tele-operating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.

Jeffrey Lipton, Postdoc, CSAIL and Lead Author on a related paper about the system

The researchers even envision that such a system could help employ growing numbers of jobless video-gamers by “gamifying” manufacturing positions.

The team used the Baxter humanoid robot from Rethink Robotics, but said the approach can work with other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL Director Daniela Rus and Researcher Aidan Fay. They presented the paper at the recent IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver.

There have traditionally been two main approaches to using VR for tele-operation.

In a direct model, the user's vision is coupled directly to the robot's state. With these systems, a delayed signal can lead to headaches and nausea, and the user’s viewpoint is restricted to a single perspective.

In a cyber-physical model, the user is separate from the robot, interacting only with a virtual copy of the robot and its environment. This requires far more data and dedicated spaces.

The CSAIL team’s system sits halfway between these two approaches. It solves the delay problem, since the user continuously receives visual feedback from the virtual world. It also solves the cyber-physical problem of feeling distinct from the robot: once a user puts on the headset and logs into the system, they feel as if they are inside Baxter’s head.
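
In rough terms, the key is that the headset view is driven by the local virtual control room rather than by the (possibly delayed) robot feed. The Python sketch below illustrates that decoupling; the class and function names are illustrative assumptions, not code from the paper.

```python
import time

class RobotFeed:
    """Receives camera frames from the robot over the network."""
    def __init__(self):
        # Updated asynchronously by a network thread; may lag the robot.
        self.latest_frame = None

class VirtualControlRoom:
    """Local VR scene rendered at the headset's full refresh rate."""
    def render(self, frame):
        # Draw the control room and paint the most recent robot frame
        # onto the in-room display. A late frame only makes that display
        # stale; the user's own head motion never lags.
        pass

def vr_loop(room, feed, hz=90.0):
    # The render loop never blocks on the network, which is what avoids
    # the delayed-signal nausea of the direct model.
    period = 1.0 / hz
    while True:
        room.render(feed.latest_frame)
        time.sleep(period)
```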

The system imitates the homunculus model of the mind — the idea that there is a small human inside the human brain regulating one’s actions, viewing the images an individual sees and understanding them for the individual. While it is a strange idea for humans, for robots it is perfect: Inside the robot is a human in a virtual control room, seeing through its eyes and regulating its actions.

Using Oculus’ controllers, users can interact with controls that appear in the virtual space to open and close the grippers that grasp, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their own hand while viewing a live display of the arm.
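
As a rough illustration of how such a control mapping could work (the `controller`, `gripper`, and `arm` interfaces below are hypothetical stand-ins, not the actual CSAIL, Oculus, or Rethink Robotics APIs):

```python
# Hypothetical mapping from controller input to robot commands.
def handle_input(controller, gripper, arm):
    if controller.trigger_pressed():
        gripper.close()   # squeeze the trigger to grasp
    else:
        gripper.open()    # release the trigger to let go
    # Drive the arm toward the tracked hand pose; the in-VR distance
    # between the arm's marker and the hand is what the user watches.
    arm.move_toward(controller.hand_pose())
```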

To facilitate these movements, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to give a sense of co-location.
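
A minimal sketch of that chain of mappings, assuming standard 4x4 homogeneous transforms and placeholder calibration values (none of these matrices come from the paper), might look like this:

```python
import numpy as np

# Calibration transforms: where the user's tracking frame sits inside the
# virtual control room, and where the virtual room sits relative to the
# robot's base. Identity matrices here are placeholder assumptions.
T_virtual_from_human = np.eye(4)
T_robot_from_virtual = np.eye(4)

def hand_pose_to_robot(T_human_hand):
    """Map a tracked hand pose (4x4 homogeneous matrix in the human's
    frame) through the virtual space into the robot's base frame."""
    T_virtual_hand = T_virtual_from_human @ T_human_hand
    return T_robot_from_virtual @ T_virtual_hand

# With identity calibrations, a pose passes through unchanged, which is
# exactly the "co-located" feeling described above.
print(hand_pose_to_robot(np.eye(4)))
```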

The system is also more flexible than earlier systems that demand substantial resources. Other systems might extract 2D information from each camera, construct a full 3D model of the environment, and then process and redisplay the data. In contrast, the CSAIL team’s method sidesteps all of that by simply taking the 2D images that are shown to each eye. (The human brain does the rest, automatically inferring the 3D information.)
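
A hedged sketch of that pass-through approach, with hypothetical camera and headset interfaces, could be as simple as:

```python
# Pass-through stereo sketch: one robot camera image per eye, no explicit
# 3D reconstruction. `grab_frame` and `show` are hypothetical placeholders
# for whatever camera and headset interfaces the real system uses.
def update_stereo_view(headset, left_cam, right_cam):
    left_img = left_cam.grab_frame()    # 2D image from the left camera
    right_img = right_cam.grab_frame()  # 2D image from the right camera
    headset.show(left_eye=left_img, right_eye=right_img)
    # The user's visual system fuses the pair into depth, so no 3D model
    # of the environment ever needs to be built or re-rendered.
```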

To test the system, the team first tele-operated Baxter to perform simple tasks such as stapling wires or picking up screws. They then had test users tele-operate the robot to gather and stack blocks.

Users completed the tasks at a much higher rate than with the direct model. As expected, users with gaming experience were more at ease with the system.

Compared with existing state-of-the-art systems, CSAIL’s system was better at grasping objects 95% of the time and was 57% faster at performing tasks. The team also demonstrated that the system could pilot the robot from far away: in one test, Baxter at MIT was controlled from a hotel’s wireless network in Washington.

This contribution represents a major milestone in the effort to connect the user with the robot's space in an intuitive, natural, and effective manner.

Oussama Khatib, Professor, Computer Science, Stanford University

The team ultimately wants to focus on making the system more scalable, supporting many users and different types of robots that are compatible with existing automation technologies.

The project was funded partly by the Boeing Company and the National Science Foundation.

Video: Operating Robots with Virtual Reality (Credit: MIT CSAIL)
