
Next-Gen Surgical Robots with Enhanced Flexibility

In a major advance for autonomous surgery, a robot trained on surgical videos successfully completed a critical phase of a gallbladder removal without human assistance. For the first time, the robot operated on a lifelike patient and responded to real-time voice commands, much like a surgical resident working alongside a mentor.

The Surgical Robot Transformer-Hierarchy performing a gallbladder surgery. Image Credit: Juo-Tung Chen / Johns Hopkins University

Developed by a team at Johns Hopkins University, the robot performed consistently across multiple trials with the calm precision and adaptability of a skilled human surgeon—even when confronted with challenges that often arise during real-world medical procedures.

This advancement moves us from robots that can execute specific surgical tasks to robots that truly understand surgical procedures. This is a critical distinction that brings us significantly closer to clinically viable autonomous surgical systems that can work in the messy, unpredictable reality of actual patient care.

Axel Krieger, Medical Roboticist, Johns Hopkins University

The research, funded in part by the Advanced Research Projects Agency for Health (ARPA-H), is a notable step toward surgical robots capable of functioning independently in dynamic, high-stakes clinical settings. The findings are published in Science Robotics.

In 2022, Krieger’s earlier system, the Smart Tissue Autonomous Robot (STAR), made headlines for performing the first autonomous laparoscopic surgery on a live animal. But that robot operated under highly controlled conditions and followed a tightly scripted surgical plan, which Krieger likened to teaching a robot to drive a specific route with a detailed map.

His new system, however, he noted, “is like teaching a robot to navigate any road, in any condition, responding intelligently to whatever it encounters.”

The Surgical Robot Transformer-Hierarchy (SRT-H) system can perform a genuine surgical procedure: it adapts in real time to individual anatomical differences, makes autonomous decisions mid-procedure, and self-corrects when unexpected situations arise.

Powered by the same machine learning architecture behind ChatGPT, SRT-H is also interactive. It can respond to spoken commands like “grab the gallbladder head” and adjust based on corrections such as “move the left arm a bit to the left,” learning from each piece of feedback to improve its performance.
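The system's name and the paper's title both point to a hierarchical design: a high-level policy reasons in language about which surgical step, or spoken correction, comes next, while a low-level policy translates that instruction and the camera view into instrument motion. The toy sketch below illustrates that division of labor only; every name in it, including the three-step task list, is a hypothetical stand-in, not the SRT-H implementation.

```python
# Toy sketch of a hierarchical, language-conditioned control loop.
# All names and the three-step task list are hypothetical illustrations,
# not the SRT-H implementation.

STEPS = ["grab the gallbladder head", "clip the cystic duct", "cut the duct"]

def high_level_instruction(step_index: int, correction: str | None) -> str:
    """High-level policy: choose the next language instruction.
    A spoken correction, if present, overrides the planned step."""
    return correction if correction else STEPS[step_index]

def low_level_action(observation: dict, instruction: str) -> dict:
    """Low-level policy: map (camera observation, instruction) to a motor
    command. In a real system this would be a learned visuomotor model."""
    return {"arm": "left", "command": f"execute '{instruction}'"}

def control_loop() -> None:
    for i in range(len(STEPS)):
        obs = {"image": None}                    # stand-in for camera frames
        instr = high_level_instruction(i, None)  # no correction this step
        action = low_level_action(obs, instr)
        print(action["command"])

control_loop()
```

The appeal of this split is that a correction like “move the left arm a bit to the left” only needs to change the instruction stream; the low-level controller itself stays unchanged.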

This work represents a major leap from prior efforts because it tackles some of the fundamental barriers to deploying autonomous surgical robots in the real world. Our work shows that AI models can be made reliable enough for surgical autonomy—something that once felt far-off but is now demonstrably viable.

Ji Woong "Brian" Kim, Study Lead Author and Postdoctoral Researcher, Stanford University

Last year, Krieger’s team used the system to train a robot on three core surgical skills: needle manipulation, tissue lifting, and suturing. Each task took only a few seconds to complete.

The gallbladder removal procedure, however, posed a far greater challenge. It involved a coordinated sequence of 17 steps, requiring the robot to precisely identify and grasp ducts and arteries, apply surgical clips in the correct positions, and use scissors to sever tissues—all without human intervention.

To learn the procedure, SRT-H watched surgical videos of Johns Hopkins clinicians operating on pig cadavers. The team enhanced this visual training with text captions describing each task. After this training phase, the robot performed the full procedure with 100% accuracy.
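This training recipe, demonstrations paired with step-by-step captions, is a form of language-conditioned imitation learning, the framework named in the paper's title. As a rough illustration of the general technique, a behavior-cloning loop conditioned on text might look like the following PyTorch sketch; the model, feature dimensions, and random stand-in data are assumptions for illustration, not the paper's architecture or training code.

```python
# Toy behavior-cloning loop for language-conditioned imitation learning.
# The model, shapes, and random stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, img_dim=512, text_dim=128, action_dim=7):
        super().__init__()
        # In practice, features would come from pretrained vision/text encoders.
        self.head = nn.Sequential(
            nn.Linear(img_dim + text_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, img_feat, text_feat):
        return self.head(torch.cat([img_feat, text_feat], dim=-1))

policy = LanguageConditionedPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for _ in range(100):                    # stand-in for demonstration batches
    img_feat = torch.randn(32, 512)     # encoded video frames
    text_feat = torch.randn(32, 128)    # encoded step captions
    expert_action = torch.randn(32, 7)  # demonstrated tool motions
    pred = policy(img_feat, text_feat)
    loss = nn.functional.mse_loss(pred, expert_action)  # imitate the expert
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real system, the image and text features would be derived from the surgical videos and their captions, and the action targets from the clinicians' recorded tool motions.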

While the robot took longer to complete the operation than a human surgeon, the outcomes were on par with expert performance.

Just as surgical residents often master different parts of an operation at different rates, this work illustrates the promise of developing autonomous robotic systems in a similarly modular and progressive manner.

Jeff Jopling, Study Co-Author and Surgeon, Johns Hopkins University

The system also proved resilient under less predictable conditions. It operated flawlessly across varied anatomical structures and navigated real-time changes—such as when the researchers altered its starting position or introduced blood-like dyes that obscured visual cues around the gallbladder.

“To me, it really shows that it's possible to perform complex surgical procedures autonomously. This is a proof of concept that it's possible, and this imitation learning framework can automate such complex procedures with a high degree of robustness,” added Krieger.

Looking ahead, the team plans to train and test the system on additional surgeries, with the long-term goal of enabling fully autonomous operations.


SRT-H: A hierarchical framework for autonomous surgery via language conditioned imitation learning. Video Credit: Juo-Tung Chen / Johns Hopkins University

Journal Reference:

Kim, J. W., et al. (2025). SRT-H: A hierarchical framework for autonomous surgery via language-conditioned imitation learning. Science Robotics. https://doi.org/10.1126/scirobotics.adt5254
