
Researchers Design Innovative Virtual-Reality System for Teleoperation of Robots

Most manufacturing jobs require workers to be physically present to operate machinery. Imagine if these jobs could be done remotely. Scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that enables teleoperation of a robot using an Oculus Rift headset.

A VR system from the Computer Science and Artificial Intelligence Laboratory could make it easier for factory workers to telecommute. CREDIT: Jason Dorfman, MIT CSAIL.

The system places the user in a VR control room with multiple sensor displays, making them feel as though they are inside the robot’s head. Using gestures, users can match their movements to the robot’s in order to carry out a range of tasks.

A system like this could eventually help humans supervise robots from a distance. By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.

Jeffrey Lipton, Postdoctoral Associate and Lead Author of a related paper, CSAIL

The Scientists also suggest that the system could help employ the growing number of jobless video-gamers by “game-ifying” manufacturing positions.

The Researchers demonstrated their VR control technique with the Baxter humanoid robot from Rethink Robotics, but stated that the technique can be applied to other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with Daniela Rus, CSAIL Director, and Researcher Aidan Fay. The paper was presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver.

How it works

VR teleoperation has conventionally followed one of two principal approaches.

The first is a “direct” model, in which the user’s vision is directly coupled to the robot’s state. In these systems, a delayed signal can cause headaches and nausea, and the user’s viewpoint is limited to a single perspective.

The other is a “cyber-physical” model, in which the user is separate from the robot and interacts with a virtual copy of the robot and its environment. This approach requires considerably more data and specialized spaces.

The system developed by the CSAIL team sits midway between the two approaches. It solves the delay problem, since the user receives continuous visual feedback from the virtual world. It also solves the cyber-physical problem of feeling distinct from the robot: once a user puts on the headset and logs into the system, they feel as if they are inside Baxter’s head.

The system mimics the “homunculus model of mind”, the idea that there is a small human inside our brains controlling our actions, viewing the images we see and understanding them for us. While this is a strange idea for humans, for robots it fits: a human sitting in a control room is, in effect, “inside” the robot, seeing through its eyes and controlling its actions.

Using Oculus’ hand controllers, users can interact with controls that appear in the virtual space to open and close the grippers to pick up, move and retrieve items. Users can plan movements based on the distance between the arm’s location marker and their own hand while viewing a live display of the arm.
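
As a rough illustration of this control loop, the sketch below maps a controller trigger reading to gripper commands. It assumes the Baxter SDK’s Python interface (baxter_interface.Gripper, which does expose calibrate/open/close); the read_trigger() helper is a hypothetical stand-in for whatever API exposes the Oculus controller state, which the article does not specify.

```python
import rospy
from baxter_interface import Gripper

def read_trigger():
    """Hypothetical placeholder: return the Oculus controller trigger
    value in [0, 1]. The real system's controller API is not described
    in the article; replace this stub with actual controller input."""
    return 0.0

def teleop_gripper(side='left', threshold=0.5, rate_hz=30):
    """Open or close one of Baxter's grippers based on a trigger reading."""
    rospy.init_node('vr_gripper_teleop')
    gripper = Gripper(side)
    gripper.calibrate()            # Baxter's electric grippers need calibration
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        if read_trigger() > threshold:
            gripper.close()        # squeeze trigger to grasp
        else:
            gripper.open()         # release trigger to let go
        rate.sleep()

if __name__ == '__main__':
    teleop_gripper()
```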

To carry out the movements, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
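
The article does not give the exact transforms, but the chain can be sketched as composed homogeneous transformations: a hand pose measured in the human’s frame is mapped into the shared virtual frame, then into the robot’s base frame. The 4x4 matrices below are illustrative placeholders, not values from the paper; in practice they would come from calibration.

```python
import numpy as np

# Placeholder transforms: identity rotations with example offsets.
T_virtual_from_human = np.eye(4)
T_virtual_from_human[:3, 3] = [0.0, 0.0, -0.2]   # shift the operator's frame

T_robot_from_virtual = np.eye(4)
T_robot_from_virtual[:3, 3] = [0.6, 0.0, 0.3]    # offset to the robot's base frame

def map_hand_to_robot(hand_pose_human):
    """Map a 4x4 hand pose from the human's space, through the virtual
    space, into the robot's space, giving a sense of co-location."""
    hand_pose_virtual = T_virtual_from_human @ hand_pose_human
    return T_robot_from_virtual @ hand_pose_virtual

# Example: a hand held 40 cm in front of the operator.
hand = np.eye(4)
hand[:3, 3] = [0.4, 0.0, 0.0]
print(map_hand_to_robot(hand)[:3, 3])   # target position in the robot frame
```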

The system is notably flexible compared with earlier systems, which require many resources. Other systems might extract 2D information from each camera, build a full 3D model of the environment, and then process and redisplay the data.

In contrast, the CSAIL Researchers’ approach bypasses all of that by simply taking the 2D images that are displayed to each eye. The human brain does the rest, automatically inferring the 3D information.
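
A minimal sketch of this idea, assuming two webcams standing in for the robot’s stereo cameras and a side-by-side stereo layout of the kind VR displays commonly accept; the device indices are assumptions, not details from the paper. No 3D reconstruction is performed: each eye simply receives one raw camera feed.

```python
import cv2
import numpy as np

# Assumed device indices for the left- and right-eye cameras.
left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

while True:
    ok_l, left = left_cam.read()
    ok_r, right = right_cam.read()
    if not (ok_l and ok_r):
        break
    # Place the two 2D feeds side by side, one per eye, and let the
    # viewer's brain infer the depth information.
    stereo = np.hstack([left, right])
    cv2.imshow('stereo feed (left eye | right eye)', stereo)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```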

To test the system, the Researchers first teleoperated Baxter to do simple tasks such as picking up screws or stapling wires. They then had test users teleoperate the robot to pick up and stack blocks.

Users completed the tasks at a considerably higher rate than with the “direct” model. Unsurprisingly, users with prior gaming experience found the system especially easy to use.

Compared with state-of-the-art systems, the CSAIL system was better at picking up objects 95 percent of the time and 57 percent faster at performing tasks. The Researchers also demonstrated that the system can control the robot from hundreds of miles away, confirming this by operating Baxter at MIT over the wireless network of a hotel in Washington, DC.

This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner.

Oussama Khatib, Professor, Computer Science, Stanford University

The Researchers’ ultimate goal is to make the system more scalable, supporting more users and more types of robots, and to make it compatible with prevalent automation technologies.
