Cognitive Robotic Dog Could Transform Emergency Response

Meet the robotic dog with an elephant's memory and the reflexes of an experienced first responder. 

A picture of the robotic dogs being maneuvered by an engineer in a blue suit with a yellow safety hat.
Prototype robotic dogs built by Texas A&M University engineering students and powered by artificial intelligence demonstrate their advanced navigation capabilities. Image Credit: Logan Jinks/Texas A&M University College of Engineering

This AI-powered robotic dog, created by engineering students at Texas A&M University, observes, remembers, thinks, and follows directions. The work was presented at the 22nd International Conference on Ubiquitous Robots, with the paper available via IEEE Xplore.

The robo-dog, which is designed to negotiate chaotic situations with accuracy, has the potential to transform search-and-rescue missions, disaster response, and a wide range of emergency operations.

Thanks to its advanced memory and voice-command capabilities, it could be a game changer in emergency missions.

Sandun Vitharana, an engineering technology master's student, and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, led the development of the robotic dog that neither forgets where it has been nor what it has seen.

It recognizes voice instructions, uses artificial intelligence and camera input for path planning and object detection, and its memory is designed to ensure it never forgets.
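
As a simple illustration of that pipeline, the sketch below maps a recognized spoken instruction to a navigation goal. The speech recognizer and the keyword-based parser are hypothetical stand-ins, not the team's implementation.

```python
# A minimal sketch of turning a recognized voice instruction into a navigation
# goal. The speech recognizer and keyword-based parser are hypothetical
# stand-ins, not the published system.

def recognize_speech() -> str:
    """Stand-in for an on-board speech-recognition step."""
    return "go to the second doorway and look for survivors"

def parse_command(utterance: str) -> dict:
    """Very simple keyword-based parse of the instruction into a goal."""
    goal = {"action": "navigate", "target": None, "task": None}
    if "doorway" in utterance:
        goal["target"] = "doorway"
    if "survivor" in utterance:
        goal["task"] = "search_for_survivors"
    return goal

if __name__ == "__main__":
    print(parse_command(recognize_speech()))
```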

Fundamentally, the mechano-animal is a terrestrial robot equipped with a memory-driven navigation system, powered by a multimodal large language model (MLLM).

This system analyzes visual inputs and makes routing decisions by combining environmental image capture, high-level reasoning, and path optimization within a hybrid control architecture that allows for both strategic planning and real-time adjustments.
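
That loop can be sketched in a few lines: capture a view, query a multimodal model together with a summary of what has already been seen, and hand the resulting high-level decision to a path planner. Everything below (the MLLM call, the memory structure, the planner) is an illustrative placeholder rather than the published system.

```python
# Illustrative-only sketch of the described loop: capture a view, ask a
# multimodal model for a high-level routing decision given what has already
# been seen, then hand that decision to a path planner.

from dataclasses import dataclass, field

@dataclass
class Observation:
    image_id: str      # identifier for the captured camera frame
    description: str   # scene summary (in practice produced by the MLLM)

@dataclass
class VisualMemory:
    visited: list = field(default_factory=list)

    def remember(self, obs: Observation) -> None:
        self.visited.append(obs)

    def summary(self) -> str:
        return "; ".join(o.description for o in self.visited) or "no prior route"

def query_mllm(current_view: str, memory_summary: str) -> str:
    """Placeholder for a multimodal LLM call that returns a routing decision."""
    return "advance toward the unexplored corridor on the left"

def plan_path(decision: str) -> list:
    """Placeholder path optimizer turning the decision into waypoints."""
    return ["waypoint_1", "waypoint_2"]

if __name__ == "__main__":
    memory = VisualMemory()
    obs = Observation("frame_001", "collapsed doorway ahead, clear path to the left")
    memory.remember(obs)
    decision = query_mllm(obs.description, memory.summary())
    print("High-level decision:", decision)
    print("Planned waypoints:", plan_path(decision))
```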

Robot navigation has evolved from simple landmark-based strategies to complex computational systems that utilize multiple sensory inputs. However, navigating unexpected and unstructured situations, such as disaster zones or isolated areas, has often proven problematic in autonomous exploration, where efficiency and agility are crucial.

While robot dogs and large language model-based navigation already exist in several different iterations, combining a bespoke MLLM with a visual memory-based system is a new concept, particularly in a general-purpose and flexible framework.

Some academic and commercial systems have integrated language or vision models into robotics. However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.

Sandun Vitharana, Master's Student, Engineering Technology, Texas A&M University

The robot, like humans, exhibits both reactive behavior and deliberative decision-making. It responds rapidly to prevent collisions and conducts high-level planning by analyzing its current view and determining the optimal course of action.

Moving forward, this kind of control structure will likely become a common standard for human-like robots.

Sanjaya Mallikarachchi, Doctoral Student, Interdisciplinary Engineering, Texas A&M University
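
A minimal sketch of that reactive/deliberative split is shown below: a fast reactive check can preempt the slower high-level plan whenever an obstacle gets too close. The sensor reading, the distance threshold, and the planner are assumed stand-ins, not the authors' control code.

```python
# Hedged sketch of a hybrid controller: the reactive layer runs first and can
# override the deliberative plan. Sensor, threshold, and planner are
# hypothetical stand-ins.

import random
from typing import Optional

SAFE_DISTANCE_M = 0.5  # assumed obstacle threshold for the reactive layer

def read_min_obstacle_distance() -> float:
    """Stand-in for a range sensor: nearest obstacle distance in meters."""
    return random.uniform(0.1, 3.0)

def deliberative_plan(goal: str) -> str:
    """Stand-in for slow, high-level reasoning (e.g., an MLLM choosing a route)."""
    return f"follow corridor toward {goal}"

def reactive_override(distance_m: float) -> Optional[str]:
    """Fast layer: immediately stop if an obstacle is inside the safety margin."""
    if distance_m < SAFE_DISTANCE_M:
        return "stop and step back"
    return None

def control_step(goal: str) -> str:
    # The reactive check runs first and can preempt the deliberative plan.
    override = reactive_override(read_min_obstacle_distance())
    return override if override else deliberative_plan(goal)

if __name__ == "__main__":
    for _ in range(3):
        print(control_step("the last known location of a survivor"))
```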

The robot's memory-based approach enables it to recall and reuse previously traveled routes, increasing navigation efficiency by reducing repeated exploration. This capability is vital in search-and-rescue efforts, particularly in unmapped locations and GPS-denied settings.
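
The route-reuse idea can be illustrated with a small memory keyed by start and end landmarks: if a path between them has already been traveled, it is recalled instead of re-explored. The data structure and names below are illustrative assumptions only, not the published system.

```python
# Illustrative sketch of route reuse: recall a previously traveled path
# between two landmarks instead of exploring it again.

from typing import Dict, List, Optional, Tuple

class RouteMemory:
    def __init__(self) -> None:
        self._routes: Dict[Tuple[str, str], List[str]] = {}

    def store(self, start: str, end: str, waypoints: List[str]) -> None:
        self._routes[(start, end)] = waypoints

    def recall(self, start: str, end: str) -> Optional[List[str]]:
        return self._routes.get((start, end))

def navigate(memory: RouteMemory, start: str, end: str) -> List[str]:
    known = memory.recall(start, end)
    if known:
        return known                                   # reuse: no repeated exploration
    explored = [start, "newly_explored_segment", end]  # placeholder for exploration
    memory.store(start, end, explored)
    return explored

if __name__ == "__main__":
    mem = RouteMemory()
    print(navigate(mem, "entrance", "stairwell"))  # first pass explores
    print(navigate(mem, "entrance", "stairwell"))  # second pass recalls from memory
```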

The possible uses could extend well beyond emergency response. Robots like this could help hospitals, warehouses, and other large facilities operate more efficiently. Its navigation system could also guide people with visual impairments, or support tasks such as clearing minefields and conducting reconnaissance in dangerous regions.

Dr. Isuru Godage, an assistant professor in the Department of Engineering Technology and Industrial Distribution, advised the project.

The core of our vision is deploying MLLM at the edge, which gives our robotic dog the immediate, high-level situational awareness and emotional intelligence that were previously impossible. This allows the system to bridge the interaction gap between humans and machines seamlessly. Our goal is to ensure this technology is not just a tool but a truly empathetic partner, making it the most sophisticated, first-responder-ready system for any unmapped environment.

Dr. Isuru Godage, Assistant Professor, Department of Engineering Technology and Industrial Distribution, Texas A&M University

Meet the AI Robot Dog Built to Save Lives

Video Credit: Logan Jinks/Texas A&M University College of Engineering

Vitharana and Mallikarachchi presented the work and demonstrated the robot's capabilities at the 22nd International Conference on Ubiquitous Robots. The findings were published as "A Walk to Remember: MLLM Memory-Driven Visual Navigation."

Journal Reference:

Vitharana, S., et al. (2025). A Walk to Remember: MLLM Memory-Driven Visual Navigation. 22nd International Conference on Ubiquitous Robots. DOI: 10.1109/UR65550.2025.11078086. https://ieeexplore.ieee.org/document/11078086
