Thought Leaders

The Development of an Algorithm that Helps Soft Robots Understand Their Surroundings

Andrew Spielberg and Alexander Amini, Robotics Researchers
Computer Science and Artificial Intelligence Laboratory (CSAIL)
Massachusetts Institute of Technology

Soft robot development could benefit from an algorithm that optimizes sensor placement, allowing such machines to better ‘understand’ their environments. Alexander Amini and Andrew Spielberg, Ph.D. students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed an algorithm that helps engineers place sensors to optimize soft robot designs.

Can you give our readers a summary of your recent research?

Our research focuses on soft robots, a new "breed" of robot attracting growing interest in the robotics community. Rigid robots have a stiff skeleton with discrete joints. By contrast, soft robots are flexible throughout their bodies and typically have no joint structure. While rigid robots can be modeled quite compactly - you simply need to know the state of all of the robot's joints to reconstruct its pose - soft robots can bend in a multitude of different ways.
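
To make that contrast concrete, here is a minimal sketch (the link lengths, particle count, and dimensions are illustrative, not taken from the interview): a rigid two-link arm's pose is fully recoverable from two joint angles, while a soft body's state is the full set of material-point positions and velocities.

```python
import numpy as np

# Rigid robot: pose is fully determined by a few joint angles.
# A two-link planar arm, for example, needs only two numbers.
def rigid_arm_pose(joint_angles, link_lengths=(1.0, 1.0)):
    """Forward kinematics: recover every link endpoint from joint state."""
    theta1, theta2 = joint_angles
    l1, l2 = link_lengths
    elbow = np.array([l1 * np.cos(theta1), l1 * np.sin(theta1)])
    tip = elbow + np.array([l2 * np.cos(theta1 + theta2),
                            l2 * np.sin(theta1 + theta2)])
    return elbow, tip

# Soft robot: no joint shortcut exists; the state is the full set of
# material-point positions and velocities, e.g. thousands of particles.
num_particles = 5000  # illustrative
soft_body_state = {
    "positions": np.zeros((num_particles, 2)),   # x, y per particle
    "velocities": np.zeros((num_particles, 2)),  # vx, vy per particle
}
```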

Modeling and simulation are both difficult, but soft robots provide various benefits - they can be safer for humans to work with, more efficient in storing and releasing energy, and more robust at gripping objects for manipulation, and they can snake their way into areas rigid robots cannot reach.

Of course, with these advantages come challenges. Not only are these creatures difficult for us to model, but it is also difficult for the robots to model themselves. In the same way that humans rely on neurons to sense our internal state and our state in the world, soft robots need sensors to understand theirs.

These sensors can be expensive to fabricate, though. So, in this work, we seek to answer two questions simultaneously:

1) How do we interpret strain sensor input to enable soft robots to complete tasks as well as possible?

2) Where do we put these strain sensors to give soft robots the best bang for their buck?
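
One way to read these two questions as a single optimization - a schematic sketch of the co-design idea, not the paper's exact formulation - is to learn a gate per candidate sensor site jointly with the task model, penalizing how many gates stay open:

```python
import torch
import torch.nn as nn

class GatedSensorReadout(nn.Module):
    """Schematic co-design layer: one learnable gate per candidate
    sensor site. Training trades task accuracy (question 1) against
    the number of open gates; the sites whose gates survive answer
    "where do we put the sensors?" (question 2)."""

    def __init__(self, num_candidate_sites):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(num_candidate_sites))

    def forward(self, strain_readings):
        # strain_readings: (batch, num_candidate_sites)
        return strain_readings * torch.sigmoid(self.gates)

    def sparsity_penalty(self):
        # L1 pressure on gate activations pushes most sites toward off.
        return torch.sigmoid(self.gates).sum()

# Joint objective: total_loss = task_loss + lam * readout.sparsity_penalty()
```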

(a) The foundation of the models is a Particle Sparsifying Feature Extractor (PSFE) which takes as input the full, dense sensory information (left) and extracts a global feature representation (right) from a sparse subset of the inputs. The model simultaneously learns this representation and sparsification of the input. Since the input is an unordered point cloud, the PSFE also maintains order invariance through shared feature and point transformations as well as global pooling operations. The team employs the PSFE on various complex tasks: (b) Supervised regression and classification of object characteristics from grasp data. (c) Learned proprioception by combining PSFE with a variational decoder network. (d) Learned control policies for a soft robot.
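
A minimal PyTorch sketch of the order-invariance idea described in the caption - a PointNet-style shared per-point transformation followed by global pooling, plus a simple top-k sparsification. The layer sizes and selection rule are our illustrative choices, not the paper's:

```python
import torch
import torch.nn as nn

class SparsifyingPointFeatures(nn.Module):
    """Sketch of a PSFE-like extractor: a shared per-point MLP embeds
    and scores each input point, a top-k selection keeps a sparse
    subset, and symmetric max-pooling yields a global feature that is
    invariant to the ordering of the input point cloud."""

    def __init__(self, point_dim=4, feat_dim=64, keep_k=10):
        super().__init__()
        self.keep_k = keep_k
        self.point_mlp = nn.Sequential(        # shared across all points
            nn.Linear(point_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )
        self.score = nn.Linear(feat_dim, 1)    # importance per point

    def forward(self, points):
        # points: (batch, num_points, point_dim), in arbitrary order
        feats = self.point_mlp(points)                   # (B, N, F)
        scores = self.score(feats).squeeze(-1)           # (B, N)
        top = scores.topk(self.keep_k, dim=1).indices    # sparse subset
        idx = top.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        sparse_feats = feats.gather(1, idx)              # (B, k, F)
        return sparse_feats.max(dim=1).values            # (B, F) global

# The same global feature results for any permutation of the points:
x = torch.randn(2, 100, 4)
model = SparsifyingPointFeatures()
assert torch.allclose(model(x), model(x[:, torch.randperm(100)]), atol=1e-6)
```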

What does this research mean for the future of soft robot development?

We're hoping that our approach will help soft robot engineers better determine where to place sensors when building their robots. We performed a user study and found that human intuition for choosing sensor locations is actually quite poor; human placements performed worse than an algorithm that places sensors at random. Since embedding sensors in soft robots is laborious, and placements are difficult to modify after fabrication, we're hopeful this algorithm will both streamline the design process and improve soft robots' performance.

Co-Learning of Task and Sensor Placement for Soft Robotics

What benefits come with ensuring accurate sensor placement?

Our soft robots can learn to solve specific tasks more effectively when sensors are placed in task-relevant locations. For example, we demonstrate an improved ability to identify the shape and stiffness of objects grasped by a soft gripper, to reconstruct the world state (position/velocity) of every part of the soft robot, and even to perform closed-loop soft robot control for ground locomotion. And we can do all of that with better accuracy or performance than randomly chosen or human-chosen sensor placements.
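
Schematically, each of those tasks hangs its own small head off the same shared sensor feature; the dimensions below are illustrative placeholders, not the paper's:

```python
import torch.nn as nn

feat_dim = 64  # illustrative size of the shared global sensor feature

shape_classifier = nn.Linear(feat_dim, 6)      # e.g. 6 object shape classes
stiffness_regressor = nn.Linear(feat_dim, 1)   # scalar stiffness estimate
state_decoder = nn.Linear(feat_dim, 5000 * 4)  # per-particle position/velocity
control_policy = nn.Sequential(                # closed-loop locomotion control
    nn.Linear(feat_dim, 32), nn.ReLU(),
    nn.Linear(32, 8),                          # e.g. 8 actuator signals
)
```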

Can you give our readers information on the R&D of the algorithm?

This was an idea that sprang out of a previous project/paper that one of us (Andrew Spielberg) worked on. In that paper, "Learning-In-The-Loop Optimization," we looked at ways of learning proprioceptive models and controllers for soft robots from a (simulated) camera feed. There, the sensor is external to the robot itself. If you wanted to deploy a soft robot using that strategy, you'd have to have a camera following it at all times, transmitting the feed to the soft robot for it to reason about and choose its next control signal. That's really cumbersome and isn't how we envision soft robots working in the real world. We had this idea that perhaps we could solve the same tasks using onboard strain sensors. Of course, that opened up an interesting co-design problem - where should we put the strain sensors to solve these tasks?

Left: Visualization of the effect of sensor locations (large green circles) on the 2D elephant, with different sensor placements (columns) over five exemplar poses (rows). Frames show the reconstruction error with all but the shown sensor turned off. Brighter colors indicate larger deviation of the reconstructed robot (blue) from the ground truth (red). Sensors lower reconstruction errors most in their immediate neighborhoods. The rightmost column shows all five sensors turned on, yielding the best reconstructions. Right: Latent space of the 2D elephant. Columns represent different latent dimensions of the elephant, generated by one-hot latent vector activations; each column sweeps the latent activation from −1.0 to 1.0. Redder particles indicate higher speeds in x; bluer particles indicate higher speeds in y.
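
The latent sweep in the right panel can be reproduced with a few lines: activate one latent dimension at a time, sweep it across its range, and decode. The decoder below is a hypothetical stand-in for the trained variational decoder, not the paper's model:

```python
import torch

def latent_sweep(decoder, latent_dim, active_dim, steps=7):
    """Generate one column of the figure: vary a single latent
    dimension from -1.0 to 1.0 with all others held at zero, and
    decode each one-hot-style vector to a full particle state."""
    frames = []
    for value in torch.linspace(-1.0, 1.0, steps):
        z = torch.zeros(latent_dim)
        z[active_dim] = value
        frames.append(decoder(z))  # per-particle positions/velocities
    return frames

# Stand-in decoder: an 8-d latent mapped to 5000 particles x (pos, vel).
decoder = torch.nn.Linear(8, 5000 * 4)
frames = latent_sweep(decoder, latent_dim=8, active_dim=0)
```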

What soft robotics problems can the deep-learning architecture help solve?

So far, we have tested our architecture on problems in grasp classification, state reconstruction, and terrestrial locomotion for soft robots. Because our approach generalizes broadly, we are excited to explore its application to a wider range of soft robotics tasks.

How could the algorithm assist in the automation of robot design?

We imagine a workflow where humans who want to build a soft robot first model it (except for the sensors and controller) in a simulator. Our algorithm then automatically produces a controller and sensor placements for the engineer. Hopefully, we can also extend this algorithm to other aspects of soft robot design, such as shape or actuator placement.
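
That envisioned workflow might look like the sketch below, where every callable is a hypothetical placeholder rather than a real API:

```python
def design_soft_robot(robot_model, task, simulate, co_learn, train_policy):
    """Hypothetical end-to-end workflow: the engineer supplies the
    robot model (minus sensors and controller); the algorithm returns
    sensor placements and a controller. The three callables stand in
    for a simulator and the two learning stages."""
    sim = simulate(robot_model)                    # human-built model
    sensor_sites, features = co_learn(sim, task)   # co-learned placement
    controller = train_policy(features, task)      # learned control
    return sensor_sites, controller
```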

What potential is there for the development of other algorithms?

We've demonstrated how to learn sensor placements and soft robot models for general soft robotic tasks in a supervised setting. It would be interesting to extend this algorithm to reinforcement learning or to semi-supervised settings.

What's next for the team at MIT?

We're excited by the efficacy of our algorithm as trained on virtual robots, but the real test will be on physical robots. We're eager to see how our technique can help enable untethered soft robots and boost soft robots' performance in the wild.

Where can readers find more information?

Readers can check out the paper and the video (and promo video).

About Andrew Spielberg

Andrew Spielberg works at the intersection of fabrication and robotics, creating tools that enable users to quickly design, program, and fabricate rigid and soft robots. He seeks to make digital fabrication and design available to everyone. Andrew is co-advised at MIT CSAIL by Wojciech Matusik and Daniela Rus. His work has been nominated for and received best paper awards at ICRA, CHI, and RoboSoft. He received his M.Eng. in computer science and his B.S. in computer science and engineering physics at Cornell University, and has worked at Disney Research and the Johns Hopkins University Applied Physics Laboratory.

About Alexander Amini

Alexander Amini is a Ph.D. candidate at the Massachusetts Institute of Technology, in the Computer Science and Artificial Intelligence Laboratory (CSAIL), with Prof. Daniela Rus. His research focuses on building reliable machine learning algorithms for end-to-end control (i.e., perception to actuation) of autonomous systems and formulating guarantees for these algorithms. His work has spanned learning control for autonomous vehicles, formulating confidence in deep neural networks, mathematical modeling of human mobility, and building complex inertial refinement systems. In addition to research, Amini is the lead organizer and lecturer for MIT 6.S191: Introduction to Deep Learning, MIT's official introductory course on deep learning. Amini is a recipient of the NSF Graduate Research Fellowship and completed his Bachelor of Science (B.S.) and Master of Science (M.S.) in Electrical Engineering and Computer Science at MIT, with a minor in Mathematics.


Written by

Joan Nugent

Joan graduated from Manchester Metropolitan University with a 2:1 in Film and Media Studies. During her studies, she worked as a Student Notetaker and continued working at the University after graduation as a Scribe. Joan has previously worked as a Proofreader for a market research company. She has a passion for films and photography, and in her spare time she enjoys doing illustrations and practicing calligraphy.

