
New Algorithm Could Enable Robots to Quickly and Intricately Grasp Objects

Anyone sitting at a desk with a pencil or pen in hand can try this move: grab one end of the pen with the thumb and index finger while pushing the other end against the desk. Without much difficulty, you can slide your fingers down the pen and flip it upside down, without letting it drop.

A new algorithm speeds up the planning process for robotic grippers to manipulate objects using the surrounding environment. (Image credit: Courtesy of the researchers)

But this is a computationally taxing maneuver for a robot that is sorting through a bin of objects and attempting to get a better grasp on one of them. Before it even attempts the move, it has to work through a litany of properties and probabilities, such as the geometry and friction of the pen, the table, and its two fingers, as well as how the various combinations of these properties interact mechanically, according to fundamental laws of physics.

Now, MIT engineers have found a way to significantly speed up the planning process required for a robot to adjust its grasp on an object by pushing that object against a stationary surface. Whereas conventional algorithms would take tens of minutes to plan out a motion sequence, the team’s new approach shaves this preplanning process down to less than a second.

According to Alberto Rodriguez, associate professor of mechanical engineering at MIT, the accelerated planning process will allow robots, specifically in industrial settings, to rapidly figure out ways to push against, slide along, or use features in their surroundings to reposition objects that they have grasped. Nimble manipulation such as this is useful for various tasks that involve picking and sorting, as well as intricate use of tools.

This is a way to extend the dexterity of even simple robotic grippers, because at the end of the day, the environment is something every robot has around it.

Alberto Rodriguez, Associate Professor of Mechanical Engineering, MIT

The findings are reported in The International Journal of Robotics Research. Rodriguez’s co-authors on the study are lead author Nikhil Chavan-Dafle, a graduate student in mechanical engineering, and Rachel Holladay, a graduate student in electrical engineering and computer science.

Physics in a Cone

Rodriguez’s team performs research that aims to empower robots to leverage their environment to help them achieve physical tasks like picking and sorting objects in a bin.

Currently used algorithms often take several hours to preplan a motion sequence for a robotic gripper, mainly because, for every motion it considers, the algorithm must first check whether that motion would obey a number of physical laws, such as Coulomb’s law of friction between objects and Newton’s laws of motion.
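To make that per-motion physics check concrete, here is a minimal, hypothetical sketch in Python of a Coulomb friction test for a single contact. The function name, inputs, and numbers are illustrative assumptions, not the researchers’ code, which also has to account for Newton’s laws and several contacts interacting at once.

```python
import numpy as np

def obeys_coulomb_friction(force, contact_normal, mu):
    """Illustrative check (not the authors' code): does an applied contact
    force stay within the Coulomb friction cone |f_t| <= mu * f_n?"""
    n = contact_normal / np.linalg.norm(contact_normal)
    f_n = np.dot(force, n)              # component pressing into the contact
    if f_n <= 0:                        # pulling away: the contact cannot hold
        return False
    f_t = force - f_n * n               # tangential component that drives slip
    return np.linalg.norm(f_t) <= mu * f_n

# A push mostly along the contact normal sticks; a mostly sideways push slips.
normal = np.array([0.0, 0.0, 1.0])
print(obeys_coulomb_friction(np.array([0.1, 0.0, 1.0]), normal, mu=0.5))  # True
print(obeys_coulomb_friction(np.array([1.0, 0.0, 0.5]), normal, mu=0.5))  # False
```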

“It’s a tedious computational process to integrate all those laws, to consider all possible motions the robot can do, and to choose a useful one among those,” stated Rodriguez.

He and his colleagues found a simple way to work out the physics of these manipulations before deciding how the robot’s hand should move. They did this using “motion cones,” which are essentially visual, cone-shaped maps of friction.

The inside of the cone represents all the pushing motions that could be applied to an object in a specific location while obeying the fundamental laws of physics and keeping the object in the robot’s grasp. The space outside the cone represents all the pushes that would in some way cause the object to slip out of the robot’s grasp.
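One loose way to picture why such a cone pays off computationally is that the feasible pushes can be worked out once per gripper-object-environment configuration, after which the inside-or-outside test is cheap. The sketch below is an illustration under assumptions, not the paper’s analytic construction; the names build_motion_cone, obeys_contact_physics, and push_is_safe are hypothetical.

```python
def build_motion_cone(configuration, candidate_pushes, obeys_contact_physics):
    """Hypothetical sketch: collect the pushes that keep the object in the
    grasp for one gripper/object/environment configuration. In the paper the
    motion cone is an analytic region of object motions, not an explicit set,
    but the inside/outside role is the same."""
    return {push for push in candidate_pushes
            if obeys_contact_physics(configuration, push)}

def push_is_safe(motion_cone, push):
    """Inside the cone: the object stays under control in the grasp.
    Outside the cone: the push would make the object slip."""
    return push in motion_cone
```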

Seemingly simple variations, such as how hard the robot grasps the object, can significantly change how the object moves in the grasp when pushed. Based on how hard you’re grasping, there will be a different motion. And that’s part of the physical reasoning that the algorithm handles.

Rachel Holladay, Graduate Student, Electrical Engineering and Computer Science, MIT

The researchers’ algorithm computes a motion cone for each possible configuration between a robotic gripper, the object it is holding, and the environment it is pushing against, and then selects and sequences feasible pushes to reposition the object.
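As a rough sketch of how such cones could drive a planner (an assumption-laden illustration, not the authors’ implementation), a search over push sequences can discard any candidate push that falls outside the motion cone of the current configuration and expand only the feasible ones. The helpers in_motion_cone and apply_push, and the pose representation, are hypothetical.

```python
from collections import deque

def plan_push_sequence(start_pose, goal_pose, candidate_pushes,
                       in_motion_cone, apply_push, max_depth=10):
    """Breadth-first search over sequences of pushes (illustrative only).

    Hypothetical interfaces:
      in_motion_cone(pose, push) -> True if the push lies inside the motion
          cone for this configuration, i.e. the object stays in the grasp.
      apply_push(pose, push)     -> predicted object pose after the push.
    Poses must be hashable (e.g. tuples of rounded coordinates).
    """
    frontier = deque([(start_pose, [])])
    visited = {start_pose}
    while frontier:
        pose, plan = frontier.popleft()
        if pose == goal_pose:
            return plan                         # feasible sequence of pushes
        if len(plan) >= max_depth:
            continue
        for push in candidate_pushes:
            if not in_motion_cone(pose, push):  # would cause slip: discard
                continue
            next_pose = apply_push(pose, push)
            if next_pose not in visited:
                visited.add(next_pose)
                frontier.append((next_pose, plan + [push]))
    return None                                 # no plan found within max_depth
```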

“It’s a complicated process but still much faster than the traditional method—fast enough that planning an entire series of pushes takes half a second,” stated Holladay.

Big Plans

The new algorithm was tested by the team on a physical setup with a three-way interaction, where a T-shaped block was held by a simple robotic gripper and pushed against a vertical bar. Multiple starting configurations were used, where the robot gripped the block at a specific position and pushed it against the bar from a particular angle.

For every starting configuration, the algorithm instantly produced a map of all the possible forces the robot could apply and the position of the block that would result.

“We did several thousand pushes to verify our model correctly predicts what happens in the real world,” noted Holladay. “If we apply a push that’s inside the cone, the grasped object should remain under control. If it’s outside, the object should slip from the grasp.”

The algorithm’s predictions reliably matched the physical outcomes in the laboratory, and it planned out motion sequences, such as reorienting the block against the bar before setting it down upright on a table, within a second, whereas conventional algorithms take more than 500 seconds to produce a plan.

“Because we have this compact representation of the mechanics of this three-way-interaction between robot, object, and their environment, we can now attack bigger planning problems,” stated Rodriguez.

The team hopes to use and extend its technique to allow a robotic gripper to handle tools of different kinds, for example, in a manufacturing environment.

Most factory robots that use tools have a specially designed hand, so instead of having the ability to grasp a screwdriver and use it in a lot of different ways, they just make the hand a screwdriver. You can imagine that requires less dexterous planning, but it’s much more limiting. We’d like a robot to be able to use and pick lots of different things up.

Rachel Holladay, Graduate Student, Electrical Engineering and Computer Science, MIT

This study was partially supported by MathWorks, the MIT-HKUST Alliance, and the National Science Foundation.

Source: http://www.mit.edu/
