
RoboCraft Shows That Predictive Models Can Learn to Plan Motion Effectively

When coming across a pile of play dough (the brightly colored, rubbery concoction of water, salt, and flour), the inner kid in many of us feels an overpowering sense of excitement, even if that only seldom happens in adulthood.

Researchers manipulate elastoplastic objects into target shapes from visual cues. Image Credit: Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory.

Play dough is fun and easy for a 2-year-old to manipulate, but robots struggle with the amorphous substance. Machines have grown increasingly reliable with rigid objects, yet handling soft, deformable ones raises a host of technical difficulties. Most notably, as with most flexible structures, altering one part is likely to affect everything else.

According to researchers from Stanford University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), robots recently tried their hand at the modeling material, though not for sentimental reasons. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects.

With just 10 minutes of data, “RoboCraft” could reliably plan the robot’s behavior, pinching and releasing play dough to form various letters, including ones it had never seen. The two-finger gripper performed on par with, and at times even better than, human counterparts who teleoperated the system.

Modeling and manipulating objects with high degrees of freedom are essential capabilities for robots to learn how to enable complex industrial and household interaction tasks, like stuffing dumplings, rolling sushi, and making pottery.

Yunzhu Li, Study Author and PhD Student, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

“While there’s been recent advances in manipulating clothes and ropes, we found that objects with high plasticity, like dough or plasticine—despite ubiquity in those household and industrial settings—was a largely underexplored territory. With RoboCraft, we learn the dynamics models directly from high-dimensional sensory data, which offers a promising data-driven avenue for us to perform effective planning,” Li added.

With an amorphous, soft material, the entire structure must be taken into account before any efficient and successful modeling or planning can happen. By converting images into graphs of tiny particles and coupling them with a graph neural network as the dynamics model, RoboCraft can more accurately forecast how the material will change shape.
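To make the particle-graph idea concrete, here is a minimal sketch (not the authors’ code) of how observed particles might be connected into a graph: particles within a fixed connectivity radius become neighbors, and the resulting edges are what a graph neural network passes messages along. The particle count and radius below are illustrative assumptions.

import numpy as np

def build_particle_graph(points, radius=0.02):
    """Connect every pair of particles closer than `radius` with a directed edge.

    points: (N, 3) array of particle positions sampled from the point cloud.
    Returns an (E, 2) array of (sender, receiver) index pairs.
    """
    # Pairwise distances between all particles.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Keep nearby, distinct pairs as graph edges.
    senders, receivers = np.nonzero((dist < radius) & (dist > 0.0))
    return np.stack([senders, receivers], axis=1)

# Example: 300 particles sampled from a roughly 5 cm blob of dough.
particles = np.random.rand(300, 3) * 0.05
edges = build_particle_graph(particles)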

Instead of the complex physics simulators often used to model and understand the dynamics of, and forces acting on, objects, RoboCraft relies on visual data alone. Its inner workings depend on three components to shape soft material into, say, an “R.”

The first part, perception, is all about learning to “see.” Cameras collect raw visual sensor data from the environment, which is then transformed into little clouds of particles that represent the shapes. A graph-based neural network then uses that particle data to learn to “simulate” the object’s dynamics, or how it moves.
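As a rough illustration of the “simulate” step, the sketch below runs one message-passing update of a particle-graph dynamics model: each edge computes a message from relative positions and velocities, each particle sums its incoming messages, and a node network predicts an acceleration that is integrated forward. The tiny random-weight networks, feature sizes, and time step are stand-ins for a trained model, not RoboCraft’s actual architecture.

import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for the learned edge and node networks.
params = {
    "we1": rng.normal(size=(6, 32)) * 0.1, "we2": rng.normal(size=(32, 16)) * 0.1,
    "wn1": rng.normal(size=(19, 32)) * 0.1, "wn2": rng.normal(size=(32, 3)) * 0.1,
}

def mlp(x, w1, w2):
    # Two-layer perceptron with a ReLU in between.
    return np.maximum(x @ w1, 0.0) @ w2

def dynamics_step(pos, vel, edges, dt=0.01):
    """Predict the particles' next positions from one round of message passing."""
    senders, receivers = edges[:, 0], edges[:, 1]
    # Edge features: relative position and relative velocity of each pair.
    e_feat = np.concatenate(
        [pos[senders] - pos[receivers], vel[senders] - vel[receivers]], axis=1)
    messages = mlp(e_feat, params["we1"], params["we2"])
    # Sum incoming messages at each receiving particle.
    agg = np.zeros((pos.shape[0], messages.shape[1]))
    np.add.at(agg, receivers, messages)
    # The node network predicts an acceleration; integrate to move the particles.
    accel = mlp(np.concatenate([vel, agg], axis=1), params["wn1"], params["wn2"])
    new_vel = vel + dt * accel
    return pos + dt * new_vel, new_vel

# Example: one predicted step for a small toy particle graph.
pos = rng.random((50, 3)) * 0.05
vel = np.zeros_like(pos)
d = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
senders, receivers = np.nonzero((d < 0.02) & (d > 0.0))
pos, vel = dynamics_step(pos, vel, np.stack([senders, receivers], axis=1))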

Then, using the training data from the many pinches, algorithms plan the robot’s behavior as it learns to “shape” a glob of dough. The letters come out a little sloppy, but they are unquestionably representative.
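The planning step can be pictured as a search over candidate pinches: sample gripper actions, roll each one out through the learned dynamics model, and keep whichever brings the predicted particles closest to the target letter. The sketch below uses a simple random-sampling planner and a Chamfer-distance cost; the action parameterization and the rollout stand-in are assumptions for illustration, not the paper’s exact optimizer.

import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between particle sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def plan_pinch(particles, target, rollout, n_samples=64, rng=None):
    """Return the sampled pinch whose predicted outcome best matches the target.

    rollout(particles, action) -> predicted particles after the pinch;
    in the real system this would be the learned graph dynamics model.
    """
    if rng is None:
        rng = np.random.default_rng()
    best_action, best_cost = None, np.inf
    for _ in range(n_samples):
        # A pinch parameterized by gripper midpoint, yaw angle, and final width.
        action = np.concatenate([
            rng.uniform(-0.05, 0.05, 3),   # midpoint of the two fingers (m)
            rng.uniform(0.0, np.pi, 1),    # gripper yaw
            rng.uniform(0.005, 0.03, 1),   # closing width (m)
        ])
        cost = chamfer(rollout(particles, action), target)
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action, best_cost

# Example with a dummy rollout (identity) standing in for the trained model.
blob = np.random.rand(200, 3) * 0.05
target = np.random.rand(200, 3) * 0.05
action, cost = plan_pinch(blob, target, rollout=lambda p, a: p)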

Beyond crafting cute shapes, the team is working on making dumplings out of dough and a pre-made filling. With just a two-finger gripper, that is a lot to ask at the moment: RoboCraft would need a rolling pin, a stamp, and a mold (much as a baker needs many tools to work effectively).

Further into the future, the scientists envision using RoboCraft to help with household chores and duties, which could be especially beneficial for the elderly or people with limited mobility. Given the many potential obstacles, this would require a much more adaptive representation of the dough or object, as well as research into what class of models is best suited to capture the underlying structure.

RoboCraft essentially demonstrates that this predictive model can be learned in very data-efficient ways to plan motion. In the long run, we are thinking about using various tools to manipulate materials.

Yunzhu Li, Study Author and PhD Student, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

Li also stated, “If you think about dumpling or dough making, just one gripper wouldn’t be able to solve it. Helping the model understand and accomplish longer-horizon planning tasks, such as how the dough will deform given the current tool, movements, and actions, is a next step for future work.”

Li co-wrote the work with Zhiao Huang, a PhD candidate at the University of California, San Diego; Haochen Shi, a master’s student at Stanford; Huazhe Xu, a postdoc at Stanford; and Jiajun Wu, an assistant professor at Stanford. The team will present their study at the Robotics: Science and Systems conference in New York City.

The work is partially funded by the Toyota Research Institute (TRI), the Samsung Global Research Outreach (GRO) Program, the Stanford Institute for Human-Centered AI (HAI), Amazon, Autodesk, Salesforce, and Bosch.


Robots learn how to shape Play-Doh. Video Credit: Massachusetts Institute of Technology.

Source: https://web.mit.edu/
