New Platform Developed for Scalable Testing of Safety in Autonomous Vehicles

Manufacturers racing to build autonomous vehicles (AVs) have at times given safety insufficient attention, as a few headline-making accidents have indicated.

Ravishankar K. Iyer (Image credit: University of Illinois at Urbana-Champaign)

Scientists at the University of Illinois at Urbana-Champaign have employed artificial intelligence (AI) and machine learning to enhance the safety of autonomous technology via both software and hardware improvements.

Using AI to improve autonomous vehicles is extremely hard because of the complexity of the vehicle’s electrical and mechanical components, as well as variability in external conditions, such as weather, road conditions, topography, traffic patterns, and lighting. Progress is being made, but safety continues to be a significant concern.

Ravishankar K. Iyer, CSL Professor and George and Ann Fisher Distinguished Professor of Engineering, University of Illinois at Urbana-Champaign

The team has built a platform that allows companies to address safety more rapidly and economically in the complicated, ever-changing world of autonomous technology. They are collaborating with a number of companies in the Bay Area, including NVIDIA, Samsung, and several start-ups.

“We are seeing a stakeholder-wide effort across industries and universities, with hundreds of startups and research teams, and we are tackling a few of these challenges in our group,” stated Saurabh Jha, a doctoral candidate in computer science who is directing student efforts on the project.

“Solving this challenge requires a multidisciplinary effort across science, technology, and manufacturing,” Jha stated.


One reason this research is so difficult is that AVs are intricate systems that use AI and machine learning to integrate electronic, mechanical, and computing technologies and make driving decisions in real time.

A typical AV is a mini-supercomputer on wheels: it has more than 50 processors and accelerators running as many as 100 million lines of code to support planning, computer vision, and other machine-learning tasks.

Understandably, there are concerns about the sensors and the autonomous driving stack (computing hardware and software) of these vehicles. When a car is traveling at 70 mph on a highway, failures can pose a major safety risk to drivers.

“If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his or her behavior and guide the car to a safe stopping point,” Jha explained. “However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless it is explicitly trained for such problems. In the real world, there are an infinite number of such cases.”

Conventionally, when a person has a problem with software on a smartphone or computer, the typical IT response is to turn the device off and on again. But that kind of fix is not viable for AVs, where every millisecond influences the outcome and a slow response could prove fatal. Safety concerns about such AI-based systems have grown among stakeholders in recent years because of several accidents caused by AVs.

Current regulations require companies such as Uber and Waymo, which test their vehicles on public roads, to report annually to the California DMV on how safe their vehicles are. We wanted to understand common safety concerns, how the cars behaved, and what the ideal safety metric is for understanding how well they are designed.

Subho Banerjee, CSL and Computer Science Graduate Student, University of Illinois at Urbana-Champaign

Safety Reports

The team examined all the safety reports submitted from 2014 to 2017, covering 144 AVs that drove a cumulative 1,116,605 autonomous miles. They found that, for the same number of miles driven, human-driven cars were up to 4,000 times less likely than AVs to have an accident.
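The reported disparity boils down to a per-mile rate comparison. The sketch below uses the cumulative autonomous mileage from the reports, but the accident counts and the human-driver baseline are hypothetical placeholders chosen only to illustrate the arithmetic; they are not figures from the DMV filings.

```python
# Illustrative per-mile accident-rate comparison. The counts below are
# hypothetical, not data from the California DMV safety reports.

def accidents_per_mile(accidents: int, miles: float) -> float:
    """Accident rate normalized by miles driven."""
    return accidents / miles

# AVs in the 2014-2017 reports drove ~1,116,605 autonomous miles (source);
# the accident count of 40 is a placeholder for illustration.
av_rate = accidents_per_mile(accidents=40, miles=1_116_605)

# Hypothetical human baseline: far more miles per accident.
human_rate = accidents_per_mile(accidents=2, miles=223_321_000)

# How many times more accident-prone the AVs are per mile driven.
ratio = av_rate / human_rate
print(f"AV accidents per mile are {ratio:.0f}x the human rate")
```

With these placeholder counts the ratio works out to the 4,000x figure cited above; the real disparity depends on the actual counts in the reports.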

In other words, the autonomous technology failed to handle a situation properly at a disturbing rate and disengaged, frequently relying on the human driver to take over.

The challenge scientists and companies face in improving those numbers is that until an AV system encounters a particular issue, it is difficult to program the software to resolve it.

Moreover, defects in the hardware and software stacks show up as safety-critical problems only under specific driving conditions. Consequently, tests carried out on highways or on empty or lightly trafficked roads may not be adequate, because safety violations triggered by software or hardware errors are rare.

When errors do occur, it is often only after the vehicle has covered hundreds of thousands of miles. Testing AVs over such distances demands a substantial amount of money, time, and energy, making the process highly inefficient. The researchers are using computer simulations and AI to accelerate it.

“We inject errors in the software and hardware stack of the autonomous vehicles in computer simulations and then collect data on the autonomous vehicle responses to these problems,” stated Jha. “Unlike humans, AI technology today cannot reason about errors that may occur in different driving scenarios. Therefore, vast amounts of data are needed to teach the software to take the right action in the face of software or hardware problems.”
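The inject-and-observe loop Jha describes can be sketched in miniature. Everything below (the fault types, the one-sensor model, and the toy brake/cruise controller) is a hypothetical stand-in for the team's actual simulation stack, meant only to show the shape of a fault-injection campaign: corrupt an input, run the controller, and record whenever the resulting action violates a safety condition.

```python
import random
from typing import Optional

# Minimal sketch of software-level fault injection in a simulated driving
# loop. Fault types, sensor model, and controller are illustrative
# assumptions, not the Illinois team's actual tooling.

FAULT_TYPES = ["sensor_dropout", "stuck_value", "bit_flip_noise"]

def read_sensor(true_distance: float, fault: Optional[str]) -> float:
    """Return the sensed distance to an obstacle, possibly corrupted."""
    if fault == "sensor_dropout":
        return float("inf")          # no reading at all
    if fault == "stuck_value":
        return 50.0                  # sensor frozen at a stale value
    if fault == "bit_flip_noise":
        return true_distance * random.uniform(0.1, 2.0)
    return true_distance             # fault-free reading

def controller(sensed_distance: float) -> str:
    """Naive planner: brake when an obstacle appears close."""
    return "brake" if sensed_distance < 30.0 else "cruise"

def run_campaign(trials: int = 1000, seed: int = 0) -> dict:
    """Inject random faults and count how often the controller misbehaves."""
    random.seed(seed)
    violations = 0
    for _ in range(trials):
        true_distance = random.uniform(5.0, 100.0)
        fault = random.choice(FAULT_TYPES + [None])
        action = controller(read_sensor(true_distance, fault))
        # Safety violation: the obstacle is close but the car keeps cruising.
        if true_distance < 30.0 and action == "cruise":
            violations += 1
    return {"trials": trials, "violations": violations}

print(run_campaign())
```

Each recorded violation becomes a labeled data point of the kind the quote describes: an error scenario paired with the (wrong) action taken, which can then be used to retrain or harden the driving software.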

Future Research

The research team is currently developing methods and tools to generate driving conditions and faults that have the greatest impact on AV safety. Using their method, they can identify a large number of safety-critical situations in which errors can lead to accidents, without having to enumerate every possibility on the road, yielding massive savings in money and time.
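One way to read "maximally influence AV safety" is as a prioritized search: score candidate scenarios with a cheap risk proxy and spend the simulation budget on the highest-ranked ones rather than on the full grid. The parameter grid, the heuristic, and the budget below are illustrative assumptions, not the team's published method.

```python
import itertools

# Sketch of steering fault-injection tests toward likely safety-critical
# scenarios instead of enumerating every road situation. The risk heuristic
# is an illustrative assumption.

SPEEDS = [30, 50, 70]        # vehicle speed, mph
DISTANCES = [10, 40, 80]     # distance to obstacle, meters
FAULTS = ["sensor_dropout", "stuck_value", "none"]

def risk_score(speed: int, distance: int, fault: str) -> float:
    """Cheap proxy: high speed + close obstacle + active fault = risky."""
    base = speed / max(distance, 1)
    return base * (2.0 if fault != "none" else 1.0)

# Rank the full scenario grid by the proxy, then simulate only the top few.
scenarios = list(itertools.product(SPEEDS, DISTANCES, FAULTS))
ranked = sorted(scenarios, key=lambda s: risk_score(*s), reverse=True)

budget = 5                   # expensive full simulations we can afford
for speed, distance, fault in ranked[:budget]:
    print(f"simulate: speed={speed}mph distance={distance}m fault={fault}")
```

The savings come from the ranking step: the cheap proxy is evaluated over the whole grid, while the expensive end-to-end simulation runs only on the handful of scenarios most likely to expose a safety violation.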

While testing one openly available AV technology, Apollo from Baidu, the researchers found more than 500 examples in which the software failed to handle an issue and the failure resulted in an accident.

Outcomes such as these are earning the team's work recognition in the industry. They are currently pursuing a patent for their testing technology and hope to deploy it soon. Ideally, the researchers want companies to use the technology to reproduce the identified issues and fix them before their cars are deployed.

The safety of autonomous vehicles is critical to their success in the marketplace and in society. We expect that the technologies being developed by the Illinois research team will make it easier for engineers to develop safer automotive systems at lower cost. NVIDIA is excited about our collaboration with Illinois and is pleased to support their work.

Steve Keckler, Vice President of Architecture Research, NVIDIA

Details about this research project have been reported in several IEEE publications. The research is sponsored in part by the National Science Foundation (NSF), an IBM Faculty Award, and an equipment donation from NVIDIA.

