The design centers on a low-cost, disposable 3D-printed finger and a streamlined calibration process that requires minimal data. Initial lab tests showed accurate force sensing, with strong potential for manipulating both rigid and fragile objects using visual feedback and force control.
Background
As robotics shifts toward small-batch and flexible production, there’s growing demand for grippers that can reliably handle a wide range of object types, including fragile or irregular ones.
Traditional rigid grippers, built for repetitive industrial tasks, often fall short. They’re difficult to miniaturize, risk damaging delicate items, and lack the adaptability needed for more varied tasks.
Compliant grippers, made from flexible materials, offer a more adaptable alternative. They can naturally conform to different shapes and, importantly, estimate the forces they apply by measuring their own deformation. While some prior systems used cameras and deep learning to interpret this deformation, they typically required large datasets and bulky hardware, including externally mounted cameras. These trade-offs have limited their practicality.
The new approach, called Seezer, takes a different route.
It features a monolithic 3D-printed finger with an embedded miniature camera and uses a force estimation method that requires only a small amount of training data. The result is a compact, lightweight, and more accessible solution for force-sensitive robotic manipulation.
System Design and Methods
Seezer is built around a modular, single-piece compliant finger, 3D-printed with a built-in gear segment and flexible joint (an “x-joint”) for actuation. Each fingertip has two fiducial markers, and all fingers are driven in sync by a single stepper motor via a worm gear.
A miniature camera, mounted at the gripper’s base, faces the fingertip markers to monitor their movement during gripping.
A key mechanical feature is a tension-based coupling that lets the same motor both actuate the fingers and easily attach or detach the disposable finger modules.
The system estimates force and torque in three vision-based steps (a code sketch follows the list):
- Marker Tracking: It continuously tracks the 2D positions and sizes of six fiducial markers using the integrated camera feed.
- Finger Force Estimation: A pre-calibrated linear model maps deviations between expected and observed marker positions (based on motor angle) to a 3D force vector at each fingertip.
- Gripper Force/Torque Estimation: A physics-based model then combines the fingertip forces into a full six-axis force/torque estimate, including the net gripping force.
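The article does not publish the authors' code, but the three steps map naturally onto a short pipeline. The sketch below is a minimal illustration in Python, assuming ArUco-style markers, a pre-computed expected marker position for the current motor angle, and a per-finger calibration matrix K; the real system's marker type, model coefficients, and geometry may differ (marker sizes, which are also tracked, are omitted here for brevity).

```python
import numpy as np
import cv2

# Hypothetical detector setup (OpenCV >= 4.7 ArUco API):
# detector = cv2.aruco.ArucoDetector(
#     cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
#     cv2.aruco.DetectorParameters())

def track_markers(frame, detector):
    """Step 1: detect the fiducial markers in a camera frame and return
    their 2D pixel centers keyed by marker ID (ArUco assumed here)."""
    corners, ids, _ = detector.detectMarkers(frame)
    centers = {}
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            centers[int(marker_id)] = marker_corners[0].mean(axis=0)  # (u, v)
    return centers

def finger_force(observed, expected, K):
    """Step 2: map the deviation between observed and motor-angle-expected
    marker positions to a 3D fingertip force via the calibrated linear
    model K (3x4; two markers x two pixel coordinates per finger)."""
    deviation = (observed - expected).reshape(-1)  # [du1, dv1, du2, dv2]
    return K @ deviation                           # (Fx, Fy, Fz)

def gripper_wrench(finger_forces, tip_positions):
    """Step 3: combine per-finger forces into a six-axis force/torque
    estimate at the gripper frame (torques via lever-arm cross products)."""
    F = np.sum(finger_forces, axis=0)
    T = np.sum([np.cross(r, f) for r, f in zip(tip_positions, finger_forces)],
               axis=0)
    return np.concatenate([F, T])  # (Fx, Fy, Fz, Tx, Ty, Tz)
```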
Calibration was done with a custom test rig where Seezer performed controlled motions against a high-precision force/torque sensor. Marker data and motor angles were used to fit the finger-specific linear models. These models were then validated in follow-up experiments.
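The article does not detail the fitting procedure, but a linear model of this kind can be identified with ordinary least squares. The sketch below is a plausible version of that step, assuming paired samples of marker-position deviations and reference-sensor forces; the authors' actual regressors may differ.

```python
import numpy as np

def fit_finger_model(deviations, forces):
    """Fit a per-finger linear map K (3x4) so that K @ deviation ~ force.

    deviations: (n_samples, 4) marker deviations [du1, dv1, du2, dv2]
    forces:     (n_samples, 3) forces from the precision reference sensor
    Returns K and the per-axis RMS residual as a rough quality check.
    """
    # Solve deviations @ K.T ~ forces in the least-squares sense.
    K_T, _, _, _ = np.linalg.lstsq(deviations, forces, rcond=None)
    rms = np.sqrt(np.mean((deviations @ K_T - forces) ** 2, axis=0))
    return K_T.T, rms

# Hypothetical usage with a few dozen calibration samples per finger:
# K, rms = fit_finger_model(dev_samples, force_samples)
```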
To explore practical use cases, the researchers tested Seezer in two tasks: picking and placing small gears, and gently harvesting soft redcurrants, both of which used its internal vision system for force feedback.
Results and Discussion
In lab tests, Seezer delivered accurate and efficient force estimation. For the “coarse” finger version, average gripping force errors ranged from 0.09 to 0.19 newtons (N), while full six-axis force/torque estimates showed relative errors between 8% and 24% compared to a high-precision reference sensor.
Notably, this level of accuracy required just 31 to 141 simple calibration samples, far fewer than the large training datasets typically needed for deep learning-based approaches.
A softer “fine” finger version was also tested, offering better sensitivity for detecting smaller forces. This came with a slight trade-off in precision, with relative errors ranging from 10% to 29%.
Two practical demonstrations showed how the system performs in different scenarios.
In one, Seezer autonomously carried out hour-long pick-and-place operations with small gears, using its built-in camera for visual feedback. In another, it successfully picked a delicate redcurrant berry, stopping finger closure as soon as its estimated grip force reached 400 millinewtons. This was enough to hold the fruit without causing damage.
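The berry-picking behavior reduces to a simple threshold rule: close the fingers in small increments and stop once the estimated grip force crosses the target. The loop below is a minimal sketch of that logic; step_motor and estimate_grip_force are hypothetical stand-ins for the gripper's motor interface and vision-based estimator, and only the 0.4 N threshold comes from the article.

```python
GRIP_THRESHOLD_N = 0.4   # 400 mN stop threshold from the berry-picking demo
STEP_DEG = 0.5           # per-iteration motor increment (illustrative value)
MAX_STEPS = 400          # safety bound on total closure

def gentle_grasp(step_motor, estimate_grip_force):
    """Close until the vision-based grip-force estimate reaches the
    threshold, then hold. Both callables are hypothetical interfaces."""
    for _ in range(MAX_STEPS):
        if estimate_grip_force() >= GRIP_THRESHOLD_N:
            return True   # object held at the target force
        step_motor(STEP_DEG)
    return False          # threshold never reached: likely a missed grasp
```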
That said, some limitations remain.
The system currently depends on stable lighting and a uniform background for reliable marker tracking. Adapting it to more dynamic, unstructured environments will require improved tracking robustness. Processing speed is also limited by the camera and onboard computer. And while the disposable 3D-printed fingers are low-cost and easy to swap, more work is needed to assess how well they hold up under repeated use and long-term material fatigue.
Conclusion
The Seezer gripper provides a compact, affordable approach to force-sensitive robotic manipulation. Its design, featuring a compliant 3D-printed finger with an embedded miniature camera, enables accurate force sensing through a calibration process that requires only minimal data.
Demonstrated success in both rigid object handling and delicate tasks like berry harvesting highlights the system’s adaptability.
While improvements in tracking robustness and processing speed are still needed, this work marks a step toward practical, low-cost grippers suited for delicate manipulation in flexible production environments.
Journal Reference
Duverney, C., Gerig, N., Hüls, D., Niemeyer, C., Cattin, P. C., & Rauter, G. (2026). Affordable 3D-printed miniature robotic gripper with integrated camera for vision-based force and torque sensing. npj Robotics, 4(1). DOI: 10.1038/s44182-026-00075-2. https://www.nature.com/articles/s44182-026-00075-2