
Neuron-Driven Control Could Redefine How Soft Robots Handle the Unexpected

Researchers have developed a neuron-inspired controller for soft robots that combines offline learning with real-time adaptation, cutting tracking errors by more than 44 % under stress while maintaining mathematically grounded stability.

Study: A general soft robotic controller inspired by neuronal structural and plastic synapses that adapts to diverse arms, tasks, and perturbations. Image Credit: Iuliia Pilipeichenko/Shutterstock.com

In a study published in Science Advances, the team introduced a bio-inspired control framework for soft robots that integrates offline “structural” and online “plastic” neural synapses.

Using an error-gated learning rule alongside a contraction metric for stability, the researchers demonstrated that their adaptive controller consistently outperforms baseline methods under distribution shifts and disturbances. Across multiple soft robotic arms and tasks, the system reduced tracking error and preserved high shape accuracy, even under significant external disruptions.

Background

Soft robotics aims to achieve embodied intelligence by exploiting mechanical compliance for adaptive behavior. In theory, soft materials allow robots to absorb uncertainty and interact safely with complex environments. In practice, however, controlling these systems remains difficult.

Traditional physics-based models demand precise material characterization and often break down under real-world variability.

Data-driven approaches, such as supervised learning, offer flexibility but are typically trained for fixed tasks and morphologies, limiting their generalizability. Meta-learning introduces some level of adaptability, yet it rarely provides formal stability guarantees. Meanwhile, contraction theory–based controllers can ensure stability, but often rely on restrictive assumptions about system dynamics.

This study positions itself at the intersection of these approaches.

The authors propose a neuron-inspired framework that integrates offline meta-learning to capture task-general structure with online, error-gated synaptic adaptation for real-time responsiveness. By embedding a learned contraction metric directly into the training process, the controller achieves cross-task generalization, robustness to perturbations, and provable stability, provided certain Lipschitz continuity and bounded-disturbance conditions hold.

Adaptive Learning-Based Control Methodology

The framework is organized around two tightly coupled components: model learning and policy training. Each unfolds across offline and online phases, with a clear division of responsibility between structural and adaptive elements.

For model learning, the researchers built a predictive dynamics model that estimates future states from current states and actions. This model is decomposed into a fixed basis function, implemented as a deep neural network trained offline to capture nonlinear system behavior, and a time-varying parameter vector that updates online.

In low-dimensional settings, such as tip position control, online updates rely on linear regression. In high-dimensional scenarios, such as full-body shape control, the team employed Reptile, a lightweight meta-learning algorithm based on multistep gradient descent, to support rapid adaptation.
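The paper's implementation is not reproduced here, but the decomposition can be sketched in a few lines of Python. In the snippet below, phi stands in for the offline-trained neural basis, theta is the time-varying parameter vector, update_theta_lowdim mimics the linear-regression update used for tip position control, and reptile_outer_step illustrates the Reptile-style meta update; all names, shapes, and hyperparameters are hypothetical placeholders rather than the authors' actual code.

```python
# Minimal sketch (not the authors' code) of the model decomposition:
# next-state prediction = phi(x, u) @ theta, with phi fixed after offline
# training and theta adapted online from recent sensor data.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 5))  # frozen weights standing in for the offline-trained basis network

def phi(x, u):
    """Fixed nonlinear basis: state x (3,) and action u (2,) -> features (16,)."""
    return np.tanh(W @ np.concatenate([x, u]))

def predict_next_state(theta, x, u):
    """Dynamics prediction from the fixed basis and the adaptive parameters."""
    return phi(x, u) @ theta  # theta has shape (16, 3)

def update_theta_lowdim(buffer, lam=1e-3):
    """Low-dimensional case: refit theta by ridge-regularized linear regression
    over a buffer of recent (x, u, x_next) transitions."""
    Phi = np.stack([phi(x, u) for x, u, _ in buffer])   # (N, 16)
    Y = np.stack([x_next for _, _, x_next in buffer])   # (N, 3)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)

def reptile_outer_step(theta_shared, theta_adapted, eps=0.1):
    """High-dimensional case (Reptile-style): after a few gradient steps adapt
    theta on fresh data, then nudge the shared parameters toward the adapted ones."""
    return theta_shared + eps * (theta_adapted - theta_shared)
```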

Policy training mirrors this decomposition.

The control policy generates actions to drive the robot toward target states, combining a fixed cross-task neural basis with a task-specific adaptive parameter vector. During offline training, the researchers jointly optimized the policy and a contraction metric using a composite loss function that balances behavior cloning accuracy with contraction-based stability. This enforces exponential decay in contraction distance, leading to bounded overshoot and predictable convergence rates under the stated assumptions.
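The exact loss terms are not spelled out in this summary, so the following Python sketch only illustrates the general shape of such a composite objective: a behavior-cloning term plus a hinge penalty that asks distances measured under a learned metric to shrink by a fixed factor after each closed-loop step. The callables policy, metric, and dynamics, and the weights decay and w_contract, are assumed stand-ins, not the paper's definitions.

```python
# Illustrative composite objective (not the paper's implementation):
# behavior-cloning error plus a penalty on violations of exponential decay of
# the contraction distance between nearby closed-loop trajectories.
import numpy as np

def composite_loss(policy, metric, dynamics, states, expert_actions,
                   decay=0.9, w_contract=0.1, rng=np.random.default_rng(0)):
    # Behavior cloning: match the expert actions on the offline dataset.
    bc = np.mean([np.sum((policy(x) - a) ** 2)
                  for x, a in zip(states, expert_actions)])

    # Contraction penalty: after one closed-loop step, the metric distance
    # between a state and a nearby virtual neighbor should shrink by `decay`.
    contract = 0.0
    for x in states:
        x_nb = x + 1e-2 * rng.standard_normal(x.shape)   # nearby virtual trajectory
        d_now = (x - x_nb) @ metric(x) @ (x - x_nb)
        x1, x1_nb = dynamics(x, policy(x)), dynamics(x_nb, policy(x_nb))
        d_next = (x1 - x1_nb) @ metric(x1) @ (x1 - x1_nb)
        contract += max(0.0, d_next - decay * d_now)     # hinge on the decay condition
    contract /= len(states)

    return bc + w_contract * contract
```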

Once deployed, the controller continuously updates its adaptive parameters in real time. For low-dimensional action spaces, it uses gradient-free hill-climbing; for higher-dimensional spaces, it applies multistep gradient descent.
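As a rough illustration of these two update modes (again a sketch under stated assumptions, not the published code), the hypothetical functions below assume access to a scalar tracking_error(theta) in the low-dimensional case and to its gradient grad_error(theta) in the high-dimensional case.

```python
# Sketch of the two online adaptation modes (illustrative only).
import numpy as np

def hill_climb_step(theta, tracking_error, step=0.05, rng=np.random.default_rng()):
    """Gradient-free update for low-dimensional adaptive parameters: propose a
    random perturbation and keep it only if the measured error decreases."""
    candidate = theta + step * rng.standard_normal(theta.shape)
    return candidate if tracking_error(candidate) < tracking_error(theta) else theta

def multistep_gradient_update(theta, grad_error, lr=0.01, n_steps=3):
    """Multistep gradient descent for high-dimensional adaptive parameters."""
    for _ in range(n_steps):
        theta = theta - lr * grad_error(theta)
    return theta
```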

The offline phase establishes shared representations and stable base policies, while the online phase refines both model and policy parameters using live sensor feedback. Crucially, stability is maintained throughout via the learned contraction metric, allowing the system to adjust to payload changes, airflow disturbances, and actuator failures without sacrificing control accuracy.

Experimental Validation

To evaluate performance, the researchers tested their framework on two distinct hardware platforms: a cable-driven soft arm with low-dimensional control and a shape-memory alloy (SMA)–actuated arm requiring high-dimensional shape control. They also conducted simulation studies to assess generality across morphologies.

In trajectory-tracking experiments with the cable-driven arm, the method was compared against Gaussian process and meta-learning baselines.

Under nominal conditions, all approaches achieved similar tracking accuracy. The difference became clear under perturbations. With a 50-gram tip load, the proposed controller recorded a 4.8 mm error, compared to 8.6–9.0 mm for the baselines.

Under actuator failure, its error measured 5.4 mm, whereas competing methods ranged from 10.2 to 11.9 mm. Even in dynamic scenarios, including continuously changing tip loads, sequential cable failures, and periodic airflow disturbances, the controller maintained tracking errors below 5 mm after initial transients.
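For context, these perturbed-condition results line up with the headline reduction quoted earlier, assuming it is computed against the best-performing baseline in each test: (8.6 − 4.8) / 8.6 ≈ 44 % for the loaded-tip case and (10.2 − 5.4) / 10.2 ≈ 47 % for the actuator-failure case.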

The team further tested object manipulation using the cable-driven arm equipped with an unfamiliar soft gripper. The system successfully placed objects of varying weights at different target positions, demonstrating adaptability to unknown end-effectors without retraining.

In high-dimensional shape control tasks with the SMA-actuated arm, the framework maintained over 93 % shape accuracy under varying fan speeds. A deep visual inverse kinematics baseline dropped to 80 % under the same conditions.

Even when exposed to unseen physical changes such as additional tip loads, distributed weights, and segment failures, the proposed method preserved accuracy above 92 %.

Simulations reinforced these findings. When material properties were altered, error increased by 63 % for the proposed controller, compared to 215 % for the baseline.

Insights and Analysis

Unlike prior approaches limited to specific morphologies, this framework combined offline shared-representation learning with rapid online adaptation and was validated on two distinct soft robotic platforms.

The framework handled challenging conditions, including payloads reaching 58.5 % of body weight, failure of half the actuators, and high-dimensional shape control, and outperformed baselines by more than 13 % in accuracy.

Key innovations include biologically inspired parameter decomposition and contraction theory integration, providing stability guarantees often missing in learning-based control.

Importantly, the performance gains emerge primarily under distribution shifts rather than in disturbance-free settings, indicating that improvements stem from structured adaptation rather than increased model capacity alone.

While current hardware limits control frequency (approximately 2.5 Hz for the cable-driven arm, with slower actuation for the SMA system due to material response times), the work advances embodied intelligence by unifying cognitive principles, machine learning, and control theory for robust soft robotic performance.

Conclusion

This study presents a neuron-inspired control framework that bridges adaptability and stability in soft robotics.

By decomposing models and policies into offline structural components and online plastic adaptations, and by embedding a learned contraction metric into training, the approach addresses key limitations of existing methods.

Experiments on cable-driven and SMA-actuated platforms show more than 44 % reductions in tracking error and sustained shape accuracy above 92 % under severe disturbances.

Together, these results suggest a promising direction for integrating cognitive principles, machine learning, and control theory in embodied systems.

Journal Reference

Tang, Z., Tian, L., Xin, W., Wang, Q., Rus, D., & Laschi, C. (2026). A general soft robotic controller inspired by neuronal structural and plastic synapses that adapts to diverse arms, tasks, and perturbations. Science Advances, 12(2). DOI:10.1126/sciadv.aea3712. https://www.science.org/doi/10.1126/sciadv.aea3712
