Teaching Robots to Sense Object Properties Using Only Their Joints

Can a robot figure out how heavy or soft an object is without using a single camera or force sensor? According to a recent arXiv paper, the answer is yes—and the solution lies entirely in how the robot moves.

Study: Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction. Image Credit: Summit Art Creations/Shutterstock.com

Researchers have introduced a method that estimates object properties such as mass and stiffness using only proprioception, that is, internal sensing from the robot's own joints.

By analyzing motion during manipulation and running a differentiable simulation in the background, the system infers physical characteristics without any external sensors. It runs efficiently, works with low-cost hardware, and generalizes across different object types and interactions.

Background

Humans naturally estimate object properties like weight and softness through proprioception, our internal sense of body movement and position. Inspired by this capability, researchers have asked whether robots could similarly infer physical characteristics such as mass and elasticity using only internal sensing.

Traditionally, robotic systems rely on external tools like cameras or force sensors, or on specially designed objects with embedded sensors. These setups add complexity and cost. While prior work in system identification has shown that robots can calibrate their own dynamics using proprioceptive data, extending this to identify properties of manipulated objects has usually required some form of external observation.

Recent advances in differentiable simulation provide new opportunities. These tools enable efficient parameter estimation through gradient-based optimization, but most implementations still depend on external observations to function reliably.
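Conceptually, a differentiable simulator lets the unknown physical parameters θ be updated by ordinary gradient descent on a data-matching loss, with the gradient flowing through the simulation itself (generic notation, not taken from the paper):

```latex
\[
  \theta \;\leftarrow\; \theta \;-\; \eta\, \nabla_{\theta}\,
  \mathcal{L}\big(\mathrm{sim}(\theta),\, \mathrm{data}\big)
\]
```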

This study proposes a fully proprioception-based approach—no cameras, no object tracking—bridging a key gap in current methods.

Differentiable Simulation for System Identification

The method centers on two main components: forward dynamics modeling and inverse parameter optimization.

In the forward modeling phase, the robot is represented using articulated rigid-body dynamics, defined by an ordinary differential equation (ODE) that relates joint torques to joint positions, velocities, and accelerations. Objects are modeled based on how they interact with the robot, as sketched in the equations after the list:

  • Fixed joint objects are treated as extensions of the robot's kinematic chain, contributing to overall system inertia.
  • Free-moving objects involve contact and collision forces, handled via penalty-based simulation.
  • Deformable objects are modeled using elastodynamics via the finite element method (FEM), with material properties described by Neo-Hookean elasticity.
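In generic notation (ours, not necessarily the paper's), these three modeling choices correspond to textbook formulations along the following lines; treat them as representative sketches rather than the study's exact equations:

```latex
% Articulated rigid-body ODE: mass matrix M, Coriolis/centrifugal terms C,
% gravity g(q), joint torques tau, and external (e.g., contact) forces f_ext.
\[
  M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) \;=\; \tau + J(q)^{\top} f_{\mathrm{ext}}
\]
% Penalty-based contact: a restoring force proportional to the penetration
% depth d along the contact normal \hat{n}, with stiffness k_n (assumed form).
\[
  f_{\mathrm{contact}} \;=\; k_{n}\, d\, \hat{n} \quad \text{for } d > 0
\]
% One common compressible Neo-Hookean energy density, with Lamé parameters
% (mu, lambda), deformation gradient F, and J = det F.
\[
  \Psi(F) \;=\; \tfrac{\mu}{2}\big(\operatorname{tr}(F^{\top}F) - 3\big) - \mu \ln J + \tfrac{\lambda}{2}(\ln J)^{2}
\]
```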

The inverse identification process estimates object parameters, such as mass or elasticity, by minimizing a loss function that compares simulated joint trajectories to those observed on the physical robot. This is accomplished using gradient-based optimization, with gradients computed via automatic differentiation in the NVIDIA Warp framework. Crucially, the entire pipeline depends solely on joint encoder data, removing the need for vision or force sensors; a minimal sketch of the optimization loop follows.
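To make the loop concrete, here is a minimal, self-contained sketch of proprioception-only mass identification for a one-joint arm. It uses PyTorch in place of the authors' Warp pipeline, and every specific value in it (link length, torque, learning rate, the simulate helper) is an illustrative assumption rather than a detail from the paper:

```python
# Hypothetical sketch: recover an unknown payload mass from joint-encoder
# data alone by differentiating through a forward simulator.
import torch

dt, steps = 0.01, 100          # 1 s rollout at 100 Hz (assumed)
g, L, m_link = 9.81, 0.5, 1.0  # gravity, link length [m], known link mass [kg]

def simulate(m_obj, tau=4.0):
    """Forward dynamics of a 1-DoF arm lifting a point mass at its tip."""
    q = torch.tensor(0.0)       # joint angle, measured from horizontal
    qd = torch.tensor(0.0)      # joint velocity
    inertia = (m_link / 3.0 + m_obj) * L**2               # rod + tip payload
    traj = []
    for _ in range(steps):
        grav = (m_link / 2.0 + m_obj) * g * L * torch.cos(q)  # gravity torque
        qdd = (tau - grav) / inertia                       # rigid-body ODE
        qd = qd + qdd * dt                                 # semi-implicit Euler
        q = q + qd * dt
        traj.append(q)
    return torch.stack(traj)

# "Measured" encoder trajectory; here synthesized with a true mass of 0.10 kg.
q_real = simulate(torch.tensor(0.10)).detach()

# Inverse identification: gradient descent on the trajectory-matching loss.
m_est = torch.tensor(0.5, requires_grad=True)  # deliberately poor initial guess
opt = torch.optim.Adam([m_est], lr=0.02)
for _ in range(400):
    loss = ((simulate(m_est) - q_real) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"estimated mass: {m_est.item():.3f} kg")  # should approach 0.10
```

The same pattern carries over to the contact and deformable cases: only the forward model changes, while the encoder-only loss and the gradient loop stay the same.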

Results

The researchers validated their method across three core scenarios: mass estimation with fixed joints, hidden object identification via contact, and elasticity estimation for deformable objects.

  • Fixed Joint (Mass Estimation): A robot arm lifted various objects (e.g., balls, cubes) under constant torque. Using only joint data, the system estimated a cube’s mass at 0.12 kg compared to the true 0.10 kg. Simulated joint trajectories closely matched real-world motion, and the optimization converged within seconds.
  • Contact/Collision (Hidden Object): A sphere was placed inside a closed container. The robot shook the container, and the system inferred the object’s mass—0.018 kg versus the actual 0.012 kg—purely from the container's motion, showcasing the method’s ability to handle occluded or unobservable objects.
  • Deformable Object (Elasticity Estimation): The robot compressed soft foam cubes, allowing the system to estimate elasticity parameters such as the Lamé coefficients. The approach remained accurate across a wide range of initial parameter guesses and significantly outperformed gradient-free alternatives such as CMA-ES in both accuracy and speed.

Ablation studies showed that the system is robust to variation in initialization and provides results comparable to vision-based methods, achieving similar mass estimates (e.g., 0.016 kg vs. 0.018 kg) without any external sensing.

Practical Implications

The entire method runs in just a few seconds on a standard Apple M1 Max laptop. Its ability to function without vision or force feedback makes it highly applicable in settings where external sensing is limited or unavailable, such as in-field robotics, cluttered environments, or low-cost platforms.

By relying exclusively on proprioceptive data, the method lowers both hardware and computational costs, while expanding the range of tasks robots can perform autonomously.

Conclusion

This work presents a proprioception-only framework for estimating object properties through differentiable simulation of robot-object interactions.

Validated across multiple scenarios and implemented on an accessible robotic platform, the method demonstrates accurate, efficient identification without external sensors. It provides a new path forward for robotic systems that must operate in unstructured or sensor-constrained environments, and highlights the growing potential of internal sensing paired with physics-based optimization.

Journal Reference

Chen, P. Y., Liu, C., Ma, P., Eastman, J., Rus, D., Randle, D., Ivanov, Y., & Matusik, W. (2024). Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction. arXiv. DOI: 10.48550/arXiv.2410.03920. https://arxiv.org/pdf/2410.03920


