GTRI Researchers Create Assessment Tool to Test Autonomy Logic of Unmanned Systems

For vessels operating at sea, avoiding collisions is a basic operational requirement. When those vessels are operated by humans, collision avoidance is part of basic operator training. When those vessels become highly autonomous, collision avoidance must be incorporated into complex autonomy algorithms that must be thoroughly tested before the vessels enter the water.

Researchers at the Georgia Tech Research Institute (GTRI) have created an assessment tool called AVIA for systematically stimulating and testing the logic of fully autonomous systems while they are under development. GTRI Research Scientist Tara Madden, shown with an AVIA screen, led development of the user interface. (Credit: Rob Felt, Georgia Tech)

Researchers at the Georgia Tech Research Institute (GTRI) have created an assessment tool for systematically stimulating and testing the logic of fully autonomous systems while they are under development – before they reach the operational test and evaluation stage. Known as Autonomy Validation, Introspection, and Assessment (AVIA), the tool was developed with support from the Defense Advanced Research Projects Agency (DARPA) to assess the autonomy logic of unmanned systems, and specifically for a technology demonstration vessel developed in DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program.

AVIA stimulates the actual autonomy logic of the unmanned vessel and can run thousands of assessments faster than real time and in parallel to study how an autonomous vessel would interact with a dozen or more other vessels at sea. Its graphical user interface allows testers to generate thousands of test scenarios and to assess both whether the vessel understands the situation it is in and whether its response to that situation is appropriate. This allows for extensive analysis of the full autonomy logic and enables detection of undesirable behavior earlier in the development phase. Developed for surface and underwater vessels, AVIA could also be useful for evaluating highly autonomous systems designed to operate on the ground, in the air, or even in space.

“It’s very rare to have a collision between two vessels on the open sea today, and we have to make sure that the performance of autonomous vessels equals or improves upon that of vessels operated by humans,” said Miles Thompson, a GTRI research engineer. “Using AVIA, we can stimulate the actual autonomy logic of the vessel for thousands of hours to find any issues before the system enters the water for the first time.”

The ACTUV autonomy system will have to successfully operate across a wide range of conditions. There are more than 45 different parameters to consider, including sensor capabilities, sea states, and the number and types of other vessels in the area. Other constraints include the location of land areas that may limit the vessel’s options for avoiding collisions.

Testing all of the potential combinations of these parameters would be impossible to do with an actual vessel at sea. AVIA can conduct thousands of tests in a short period of time to help developers of these systems find the few conditions in which the autonomy system may run into trouble. The tool can be asked to highlight only the tests that had unusual results.

AVIA’s initial objective was to execute a thousand one-hour real-time scenarios in less than 24 hours. One thousand hours of operation is equivalent to 42 days at sea, and compressing that much assessment into less than a day demonstrates the power of AVIA. This initial objective was achieved in the first year of the program. Subsequent years brought improved scenario fidelity, additional metrics for assessing the perception and behavioral logic, and automated approaches that flag an issue and spin out additional scenario runs to execute in parallel with the initial 1,000.
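The throughput goal above can be checked with simple arithmetic. The variable names below are illustrative; the figures come from the article.

```python
# Back-of-the-envelope numbers behind AVIA's initial throughput goal:
# 1,000 one-hour scenarios completed in under 24 wall-clock hours.
scenario_hours = 1000      # total simulated operating time
wall_clock_hours = 24      # target wall-clock budget

# Equivalent continuous at-sea operation, in days
days_at_sea = scenario_hours / 24

# Minimum combined speedup required, from a mix of parallel
# execution and faster-than-real-time simulation
combined_speedup = scenario_hours / wall_clock_hours

print(round(days_at_sea))        # 42
print(round(combined_speedup))   # 42
```

In other words, meeting the goal requires a combined factor of roughly 42x, however it is split between parallel workers and faster-than-real-time execution.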

“The hard part is finding the needles in the haystack that present potential problems,” said Thompson, a robotics research and test engineer at GTRI. “In the billions of possible combinations, there are probably only a few thousand that could cause a problem for the system. If you can find as many of these as possible before you put the vessel into the water, any combinations remaining will have a smaller likelihood of being encountered.”

Autonomy systems for unmanned vehicles can present a “black box” problem in which only the inputs and outputs are known. AVIA isolates the issues it encounters to the subsystems governing sensing, perception, and the actual behavior commanded by the autonomy. By determining where an anomalous behavior originates, AVIA can help developers of the autonomy system better understand potential issues and resolve them in advance of operations.

“We can access the actual autonomy logic at several different tap points, so if there’s a problem, we can know whether it is with the sensors, the perception, or the behavioral response,” said Thompson. “AVIA automates this analysis and makes it easier to identify where a problem might be arising.”
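The tap-point idea described above can be sketched as a pipeline that records each subsystem's output, so an anomaly can be attributed to sensing, perception, or behavior. This is a minimal illustration, not AVIA code; the stage functions and field names are hypothetical stand-ins.

```python
# Hypothetical sketch of "tap points" between autonomy subsystems:
# record each stage's output so an anomalous behavior can be traced
# back to the subsystem where it originated.
def run_with_taps(scenario, sense, perceive, behave):
    taps = {"sensing": sense(scenario)}
    taps["perception"] = perceive(taps["sensing"])
    taps["behavior"] = behave(taps["perception"])
    return taps

# Toy stages: detect a contact, classify it, choose a maneuver.
sense = lambda s: {"contact_range_m": s["true_range_m"] + s["sensor_bias_m"]}
perceive = lambda d: {"threat": d["contact_range_m"] < 500}
behave = lambda p: "evade" if p["threat"] else "hold_course"

taps = run_with_taps({"true_range_m": 450, "sensor_bias_m": 100},
                     sense, perceive, behave)
# A +100 m sensor bias makes the 450 m contact appear 550 m away, so
# the vessel holds course; the tap record shows the error entered at
# the sensing stage, not in perception or behavior.
```

Inspecting the recorded taps answers exactly the question in the quote: was the fault in the sensors, the perception, or the behavioral response?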

AVIA can automatically generate randomized starting conditions and parameters for each test, and introduce unexpected events to stress the system. The strategy used to select initial conditions, Latin hypercube sampling, helps provide confidence that enough testing has been done to represent all possible combinations. Sampling the parameter space in this way can cut the time required for the testing phase by a factor of 10, which also reduces cost.
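Latin hypercube sampling splits each parameter's range into as many equal strata as there are test runs and places exactly one sample in each stratum, so every parameter's full range is covered with few runs. A minimal self-contained sketch, with hypothetical scenario parameters (the article does not specify AVIA's actual parameter ranges):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw n_samples points via Latin hypercube sampling.

    Each parameter range in `bounds` is split into n_samples equal
    strata; exactly one sample lands in each stratum, and the strata
    are shuffled independently per dimension to decorrelate them.
    """
    rng = random.Random(seed)
    dims = []
    for low, high in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)                  # decorrelate dimensions
        width = (high - low) / n_samples
        dims.append([low + (s + rng.random()) * width for s in strata])
    return list(zip(*dims))                  # one tuple per test run

# Hypothetical parameters: sea state, vessel traffic count, sensor range (km)
bounds = [(0, 9), (1, 12), (5, 50)]
tests = latin_hypercube(10, bounds)
```

With only 10 runs, every tenth of each parameter's range is exercised once, which is why the approach needs far fewer tests than exhaustive or purely random sampling to cover the space.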

“We assume that the performance is stochastic in nature,” Thompson explained. “When you put the same input in, you will get a similar but randomized output out. The degree of this randomization is the variance in performance. We assess across the entire test domain, but we don’t have to evaluate every possible condition. This way, we bound the performance with the fewest number of tests by studying the variance at specific points in the test domain.”
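The approach Thompson describes, repeating runs at selected points in the test domain and using the observed variance to bound performance, can be sketched as follows. The scenario model and metric here are invented for illustration; only the sampling-and-bounding pattern reflects the article.

```python
import random
import statistics

def run_scenario(sea_state, rng):
    # Hypothetical stochastic metric: closest point of approach in
    # meters, degrading with sea state plus run-to-run variation.
    return 800 - 40 * sea_state + rng.gauss(0, 25)

def performance_bound(sea_state, repeats=50, seed=7):
    # Repeat runs at one point in the test domain, then use the
    # mean and standard deviation to bound likely performance
    # without testing every possible condition.
    rng = random.Random(seed)
    runs = [run_scenario(sea_state, rng) for _ in range(repeats)]
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)
    return mean - 3 * sd, mean + 3 * sd   # conservative 3-sigma bound

low, high = performance_bound(sea_state=4)
```

Evaluating such bounds at a handful of sampled points, rather than everywhere, is what lets the testing phase shrink while still characterizing performance across the whole domain.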

AVIA was built on the Test Matrix Tool developed by GTRI for the U.S. Air Force. More than 15 GTRI researchers contributed to the three-and-a-half year AVIA program. GTRI Chief Scientist Lora Weiss was the principal investigator, and Principal Research Engineer Mike Heiges served as the project director.

At GTRI, AVIA runs on secure computer clusters. The researchers are exploring options for creating an installable version that could be fielded at several test ranges. They are also exploring options for combined live-virtual operations once the ACTUV technology demonstration vessel is at sea, which would allow the autonomy logic to ingest virtual contacts while the vessel operates safely on open water.

Though AVIA was developed for evaluating autonomous vessels, the GTRI researchers built it in a modular way, with plug-ins for specific sensors and subsystems. Because of this modular design, it can also support the development of autonomous air and ground vehicles, which operate in very different environments but face the same challenge of assessing their autonomy logic.

“We could readily adapt AVIA to handle a new system,” said Thompson. “We wrote the AVIA tools in a general way so they could be applied to any autonomous vehicle. AVIA could be used to independently verify that an autonomous vehicle knows what it is doing and the operators know why the vehicle made the decisions that it made.”

