Study Produces Safety-Critical Simulation and Adaptive Control for Autonomous Vehicles

The future is already on the road: partially autonomous cars are in everyday use, equipped with automated systems such as emergency braking and lane departure warnings.

Researchers at TU Graz and AVL are working to make autonomous driving systems safer. Image Credit: © Lunghammer – TU Graz.

In such systems, software is the central vehicle component and must reliably meet high quality criteria at all times.

At TU Graz’s Institute of Software Technology, Franz Wotawa and his research group collaborated with the cyber-physical system testing team from AVL to address the huge challenges posed by this future technology—the assurance of safety via the automatic generation of comprehensive test scenarios for simulations and system-internal error compensation through an adaptive control technique.

Ontologies Instead of Test Kilometers

Test drives alone cannot adequately prove the accident safety of autonomous driving systems.

Autonomous vehicles would have to be driven around 200 million kilometers to prove their reliability—especially for accident scenarios. That is 10,000 times more test kilometers than are required for conventional cars. Although the tests so far cover many scenarios, the question always remains whether this is sufficient and whether all possible accident scenarios have been considered.

Franz Wotawa, Researcher, Institute of Software Technology, Graz University of Technology

Moreover, critical test scenarios that endanger life and limb cannot be replicated in real test drives. Autonomous driving systems must therefore have their safety tested in simulations.

According to Mihai Nica from AVL, “In order to test highly autonomous systems, we must re-think how the automotive industry validates and certifies Advanced Driver Assistance Systems and Autonomous Driving systems.”

Therefore, AVL is partnering with TU Graz to develop a unique, highly efficient method and workflow based on simulation and test case generation to prove fulfillment of the Safety of the Intended Functionality (SOTIF), quality, and system integrity requirements of autonomous systems.

Mihai Nica, AVL

Together, the team has developed novel techniques that allow far more test scenarios to be simulated than was previously possible. Rather than driving millions of kilometers, the approach uses ontologies to describe the surroundings of autonomous vehicles.

Ontologies are knowledge bases for exchanging information within a machine system; they describe, for instance, the relationships, interfaces, and behavior of individual system units so that those units can communicate with one another.

In autonomous driving systems, such units would be “autopilot,” “traffic description,” or “decision making.” The Graz team analyzed detailed information about the surroundings in driving scenarios and fed the knowledge bases with data, supplied by AVL, on the construction of roads, intersections, and the like.

From this data, driving scenarios can be derived with the help of AVL’s test case generation algorithm, which probes the automated driving system’s behavior in simulations.
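To make the idea concrete, here is a minimal sketch of an ontology-style knowledge base and how scenario parameters could be read out of it. The concept names, attributes, and structure are purely illustrative assumptions; the actual ontologies in the project are far richer and typically expressed in formats such as OWL/RDF rather than Python dictionaries.

```python
# Toy ontology: concepts with attributes (possible values) and relations.
# Illustrative only -- not the structure used by TU Graz/AVL.
ontology = {
    "Road": {"attributes": {"lanes": [1, 2], "surface": ["dry", "wet"]},
             "relations": {"has_part": ["Intersection"]}},
    "Intersection": {"attributes": {"signal": ["traffic_light", "none"]},
                     "relations": {"occupied_by": ["Pedestrian"]}},
    "Pedestrian": {"attributes": {"motion": ["crossing", "waiting"]},
                   "relations": {}},
}

def scenario_parameters(onto):
    """Flatten concept attributes into named scenario parameters."""
    return {f"{concept}.{attr}": values
            for concept, spec in onto.items()
            for attr, values in spec["attributes"].items()}

params = scenario_parameters(ontology)
# e.g. {'Road.lanes': [1, 2], 'Road.surface': ['dry', 'wet'], ...}
```

Each flattened parameter then becomes one dimension along which a test case generator can vary a driving scenario.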

Additional Weaknesses Uncovered

For the EU AutoDrive project, the team employed two algorithms to transform such ontologies into input models for combinatorial testing, which can then be executed in simulation environments.
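The article does not detail the two algorithms, but the core idea of combinatorial testing is to cover every pairwise (or, more generally, t-way) combination of parameter values with far fewer test cases than the full cross product. The following greedy pairwise generator is a standard textbook sketch of that idea, not the project’s actual algorithm, and is only practical for small parameter spaces:

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily build a 2-way (pairwise) covering set of test cases.

    params maps a parameter name to its list of possible values.
    Suitable only for small spaces: each round scans the full product.
    """
    names = list(params)
    # Every value pair (across two parameters) that must co-occur somewhere.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    tests = []
    while uncovered:
        best, best_cov = None, -1
        for candidate in product(*(params[n] for n in names)):
            case = dict(zip(names, candidate))
            cov = sum(1 for (a, va), (b, vb) in uncovered
                      if case[a] == va and case[b] == vb)
            if cov > best_cov:
                best, best_cov = case, cov
        tests.append(best)
        uncovered -= {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                      if best[a] == va and best[b] == vb}
    return tests
```

With, say, three scenario parameters of two values each, the full product has eight cases, while a pairwise covering set needs fewer; real tools (e.g. covering-array generators) scale this to dozens of parameters.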

According to Wotawa, “In initial experimental tests we have discovered serious weaknesses in automated driving functions. Without these automatically generated test scenarios, the vulnerabilities would not have been detected so quickly: nine out of 319 test cases investigated have led to accidents.”

For instance, in one test scenario, a brake assistance system failed to identify two people approaching simultaneously from different directions, and one of them was hit hard despite the braking maneuver that was initiated.

“This means that with our method, you can find test scenarios that are difficult to test in reality and that you might not even be able to focus on,” added Wotawa.

Adaptive Compensation of Internal Errors

Autonomous systems, and autonomous driving systems in particular, must be able to correct themselves in the event of failures or changed environmental conditions, and must always reliably reach given target states.

When we look at semi-automated systems already in use today, such as cruise control, it quickly becomes clear that in the case of errors, the driver can and will always intervene. With fully autonomous vehicles, this is no longer an option, so the system itself must be able to act accordingly.

Franz Wotawa, Researcher, Institute of Software Technology, Graz University of Technology

In the latest publication for the Software Quality Journal, Franz Wotawa and Martin Zimmermann, his PhD student, present a control technique that can compensate for internal errors in the software system in an adaptive manner.

The presented technique selects substitute actions so that predefined target states can still be reached, while providing a certain degree of redundancy. Action selection relies on weighting models that are adapted over time and that quantify the success rate of actions already carried out.

Besides this technique, the researchers also present a Java implementation and its evaluation in two case studies inspired by the requirements of the autonomous driving domain.
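The selection-and-weighting idea described above can be sketched in a few lines. This is a simplified illustration of the concept, not the authors’ Java implementation: the action names, the initial weight, and the fixed update step are all assumptions made for the example.

```python
class AdaptiveSelector:
    """Choose among redundant actions for a goal, weighted by past success.

    Sketch of the weighting-model idea from the article: actions that
    reach the target state are reinforced, failing ones are penalized,
    so a faulty action is eventually substituted by a redundant one.
    """

    def __init__(self, actions, initial_weight=1.0):
        # All redundant actions start out equally trusted.
        self.weights = {a: initial_weight for a in actions}

    def choose(self):
        # Prefer the action with the highest success weight so far.
        return max(self.weights, key=self.weights.get)

    def feedback(self, action, succeeded, step=0.25):
        # Update the weighting model from the observed outcome.
        self.weights[action] += step if succeeded else -step
        self.weights[action] = max(self.weights[action], 0.0)
```

For example, a selector over two redundant braking actions would start with the primary one; after a few reported failures of the primary action, `choose()` switches to the backup, which is the compensation behavior the article describes.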

Journal Reference

Zimmermann, M. & Wotawa, F. (2020) An adaptive system for autonomous driving. Software Quality Journal. doi.org/10.1007/s11219-020-09519-w.
