
Digital Helper Tells Dual-Armed Robot to Carry Out Manual Tasks

A groundbreaking bimanual robot exhibits tactile sensitivity approaching human-level dexterity, using AI to inform its actions.

Dual-arm robot holding a crisp. Image Credit: Yijiong Lin.

The new Bi-Touch system, developed by researchers at the University of Bristol and based at the Bristol Robotics Laboratory, allows robots to carry out manual tasks by sensing what to do from a digital helper.

The findings, reported in the journal IEEE Robotics and Automation Letters, show how an AI agent interprets its environment through tactile and proprioceptive feedback and then controls the robots’ behaviors, enabling precise sensing, gentle interaction, and effective object manipulation in robotic tasks.

This development could revolutionize industries such as fruit picking and domestic service, and ultimately recreate the sense of touch in artificial limbs.

With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch. And more importantly, we can directly apply these agents from the virtual world to the real world without further training.

Yijiong Lin, Study Lead Author, Faculty of Engineering, University of Bristol

Lin added, “The tactile bimanual agent can solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way.”

Bimanual manipulation with tactile feedback will be key to achieving human-level robot dexterity. However, the topic has been explored far less than single-arm settings, partly owing to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces.

The research group was able to develop a tactile dual-arm robotic system by drawing on recent advances in AI and robotic tactile sensing.

The scientists built a virtual world (simulation) containing two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism to encourage the robot agents to learn the bimanual tasks, and built a real-world tactile dual-arm robot system to which the trained agents could be applied directly.
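To illustrate the idea, the following is a minimal sketch of what a shaped reward and goal-update mechanism for a bimanual lifting task might look like. The function names, force threshold, weights, and goal step are illustrative assumptions for this example, not the authors' actual implementation.

```python
import numpy as np

def lift_reward(obj_pos, goal_pos, left_force, right_force, force_limit=2.0):
    """Hypothetical shaped reward for a bimanual lift: reward progress of the
    object toward the current goal and penalize squeezing harder than an
    assumed force limit, so the arms stay gentle."""
    progress = -np.linalg.norm(obj_pos - goal_pos)          # closer to goal -> higher reward
    gentle_penalty = sum(max(0.0, f - force_limit)           # excess contact force on either arm
                         for f in (left_force, right_force))
    return progress - 0.1 * gentle_penalty

def update_goal(goal_pos, obj_pos, reach_tol=0.02, step=np.array([0.0, 0.0, 0.05])):
    """Hypothetical goal update: once the object is close to the current goal,
    move the goal a small step upward so the agent keeps lifting."""
    if np.linalg.norm(obj_pos - goal_pos) < reach_tol:
        return goal_pos + step
    return goal_pos

# Example usage with made-up positions and contact forces
goal = update_goal(np.array([0.0, 0.0, 0.10]), np.array([0.0, 0.0, 0.095]))
reward = lift_reward(np.array([0.0, 0.0, 0.095]), goal, left_force=0.5, right_force=0.4)
```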

The robot learns bimanual skills through Deep Reinforcement Learning (Deep-RL), one of the most advanced methods in the field of robot learning. Deep-RL teaches robots to do things by letting them learn from trial and error, similar to training a dog with rewards and punishments.

For robotic manipulation, the robot learns to make decisions by attempting different behaviors to achieve designated tasks, for instance, lifting objects without breaking or dropping them.

When it succeeds, it receives a reward, and when it fails, it learns what not to do. Over time, it works out the best ways to grasp objects from these rewards and punishments. The AI agent is visually blind, relying only on proprioceptive feedback (a body's ability to sense movement, action, and location) and tactile feedback.
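As an illustration of this trial-and-error process, here is a generic training loop over tactile and proprioceptive observations. The environment stub, observation shapes, action dimension, and agent interface are assumptions made for the example; they are not taken from the published system.

```python
import numpy as np

class TactileBimanualEnvStub:
    """Stand-in environment: observations are tactile readings from both
    arms plus joint states (proprioception); there is no camera input."""
    def reset(self):
        return self._observe()

    def step(self, action):
        reward = 0.0      # placeholder; a real task would score lifting progress
        done = False
        return self._observe(), reward, done

    def _observe(self):
        return {
            "tactile_left":  np.zeros((64, 64)),  # tactile image, left fingertip (assumed size)
            "tactile_right": np.zeros((64, 64)),  # tactile image, right fingertip
            "proprio":       np.zeros(14),        # joint angles/velocities, both arms (assumed)
        }

class RandomAgent:
    """Minimal agent: acts randomly and ignores experience; shown only for structure."""
    def act(self, obs):
        return np.random.uniform(-1.0, 1.0, size=12)  # assumed 6-DoF action per arm
    def store(self, *transition):
        pass
    def update(self):
        pass

def train(env, agent, episodes=5, max_steps=100):
    """Generic trial-and-error loop: act, observe the reward, remember, learn."""
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = agent.act(obs)                            # try a behavior
            next_obs, reward, done = env.step(action)          # see how it went
            agent.store(obs, action, reward, next_obs, done)   # remember the outcome
            agent.update()                                     # learn from rewards
            obs = next_obs
            if done:
                break

train(TactileBimanualEnvStub(), RandomAgent())
```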

With this approach, the team got the dual-arm robot to safely lift items as delicate as a single Pringle crisp.

Our Bi-Touch system showcases a promising approach with affordable software and hardware for learning bimanual behaviors with touch in simulation, which can be directly applied to the real world.

Nathan Lepora, Study Co-Author and Professor, University of Bristol

Lepora added, “Our developed tactile dual-arm robot simulation allows further research on more different tasks as the code will be open-source, which is ideal for developing other downstream tasks.”

Yijiong concluded: “Our Bi-Touch system allows a tactile dual-arm robot to learn solely from simulation, and to achieve various manipulation tasks in a gentle way in the real world. And now we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch.”

Journal Reference:

Lin, Y., et al. (2023) Bi-Touch: Bimanual Tactile Manipulation With Sim-to-Real Deep Reinforcement Learning. IEEE Robotics and Automation Letters. doi.org/10.1109/LRA.2023.3295991

Source: https://www.bristol.ac.uk/
