
Highly Realistic Simulator Technology for Self-Driving Vehicle Safety Prior to Road Testing

Computer scientist Dinesh Manocha from the University of Maryland, along with colleagues from Baidu Research and the University of Hong Kong, has created a photo-realistic simulation system for training and validating self-driving vehicles. The new system offers a fuller, more authentic simulation than existing systems that use game engines or high-fidelity computer graphics and mathematically extracted traffic patterns.

The Augmented Autonomous Driving Simulation (AADS) system combines photos, videos, and LIDAR point clouds for realistic scene rendering with real-world trajectory data that can be used to predict the driving behavior and future positions of other vehicles or pedestrians on the road. (Image credit: Li et al., 2019)

Their system, referred to as Augmented Autonomous Driving Simulation (AADS), could make self-driving technology easier to evaluate in the lab while also ensuring more reliable safety before costly road testing begins.

The researchers explained their methodology in a research paper published in the journal Science Robotics on March 27th, 2019.

"This work represents a new simulation paradigm in which we can test the reliability and safety of automatic driving technology before we deploy it on real cars and test it on the highways or city roads," said Manocha, one of the paper's corresponding authors, and a professor with joint appointments in computer science, electrical and computer engineering, and the University of Maryland Institute for Advanced Computer Studies.

One possible advantage of self-driving cars is that they could be safer than human drivers, who are susceptible to fatigue, distraction, and emotional decisions that result in errors. But to ensure safety, autonomous vehicles must assess and react to the driving environment without fail. Given the countless circumstances a car might encounter on the road, an autonomous driving system requires hundreds of millions of miles' worth of test drives under challenging conditions to establish reliability.

While that could take several years to achieve on the road, initial evaluations could be carried out quickly, efficiently, and more safely using computer simulations that accurately represent the real world and model the behavior of surrounding objects. Current state-of-the-art simulation systems described in the scientific literature fall short in depicting photo-realistic environments and in reproducing real-world driver behaviors or traffic flow patterns.

AADS is a data-driven system that more accurately represents the inputs a self-driving car would receive on the road. Self-driving cars rely on a perception module, which receives and interprets information about the real world, and a navigation module that makes decisions, such as where to steer or whether to accelerate or brake, based on the perception module's output.
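The perception/navigation split described above can be sketched in a few lines. This is an illustrative toy, not the AADS implementation; the class names, thresholds, and pass-through "perception" are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-module architecture: perception infers world
# state from sensor input, navigation decides what the car should do with it.

@dataclass
class Observation:
    obstacle_distance_m: float   # distance to the nearest obstacle ahead
    lane_offset_m: float         # lateral offset from the lane center

class PerceptionModule:
    """Receives sensor input (camera frames, LIDAR returns) and infers world state."""
    def perceive(self, nearest_obstacle_m: float, lane_offset_m: float) -> Observation:
        # A real module would run detection models on raw sensor data;
        # here the "inference" is a simple pass-through for illustration.
        return Observation(nearest_obstacle_m, lane_offset_m)

class NavigationModule:
    """Makes driving decisions (brake, steer, accelerate) from perception output."""
    def decide(self, obs: Observation) -> str:
        if obs.obstacle_distance_m < 10.0:
            return "brake"
        if abs(obs.lane_offset_m) > 0.5:
            return "steer"
        return "accelerate"

perception, navigation = PerceptionModule(), NavigationModule()
print(navigation.decide(perception.perceive(5.0, 0.0)))   # brakes for a close obstacle
```

Whether the simulated sensor feed comes from computer graphics or, as in AADS, from real photos and LIDAR scans, it enters the pipeline at the perception stage, which is why the realism of that input matters so much.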

In reality, a self-driving car's perception module typically receives input from cameras and LIDAR sensors, which use pulses of light to measure distances to surrounding objects. In existing simulator technology, the perception module instead receives input from computer-generated imagery and mathematically modeled movement patterns for bicycles, pedestrians, and other cars. This is a comparatively crude representation of the real world. It is also costly and time-consuming to create, because computer-generated imagery models must be produced by hand.
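The distance measurement behind a LIDAR return is simple time-of-flight arithmetic: the sensor times a light pulse's round trip, so distance is half the round-trip time multiplied by the speed of light. A back-of-the-envelope illustration:

```python
# Time-of-flight distance from a LIDAR pulse: distance = c * t / 2,
# where t is the round-trip time of the light pulse.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """Distance to a surface given the pulse's round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A pulse returning after ~667 nanoseconds hit something roughly 100 m away.
print(round(lidar_distance_m(667e-9), 1))  # ~100.0
```

Millions of such returns per second, each tagged with the beam's direction, build up the 3D point clouds that AADS renders from.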

The AADS system integrates videos, photos, and LIDAR point clouds—which are similar to 3D shape renderings—with real-world trajectory data for bicycles, pedestrians, and other cars. These trajectories can be used to forecast the driving behavior and future positions of pedestrians or other vehicles on the road for safer navigation.
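As a minimal sketch of how recorded trajectories support forecasting, consider constant-velocity extrapolation from the last two observed waypoints. AADS uses far richer, data-driven behavior models; this toy only illustrates the idea that past positions predict future ones.

```python
# Hypothetical illustration: forecast an agent's next position by
# extrapolating the displacement between its last two recorded waypoints.

def predict_position(trajectory, dt):
    """Extrapolate the next (x, y) position from a list of (x, y) waypoints.

    dt is measured in units of the recording interval (1.0 = one step ahead).
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = x1 - x0, y1 - y0          # displacement per recorded step
    return (x1 + vx * dt, y1 + vy * dt)

# A pedestrian observed at successive time steps, moving steadily along x:
observed = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
print(predict_position(observed, 1.0))
```

A simulated ego vehicle can query such predictions for every nearby agent and plan a path that keeps a safe margin from all of them.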

"We are rendering and simulating the real world visually, using videos and photos," said Manocha, "but we're also capturing real behavior and patterns of movement. The way humans drive is not easy to capture by mathematical models and the laws of physics. So we extracted data about real trajectories from all the video we had available, and we modeled driving behaviors using social science methodologies. This data-driven approach has given us a much more realistic and beneficial traffic simulator."

Dinesh Manocha, Paper’s Co-Author and Professor of Computer Science, Electrical and Computer Engineering, Institute for Advanced Computer Studies, University of Maryland.

The researchers had a long-standing challenge to overcome in using real video imagery and LIDAR data for their simulation: every scene must respond to the self-driving car's movements, even though those movements may not have been recorded by the original camera or LIDAR sensor. Any viewpoint or angle not captured in a photo or video has to be rendered or simulated using prediction methods. This is why simulation technology has traditionally relied so heavily on computer-generated graphics and physics-based prediction methods.

To overcome this difficulty, the researchers developed a technique that isolates the various components of a real-world street scene and renders them as separate elements that can be resynthesized into a host of photo-realistic driving scenarios.

With AADS, pedestrians and vehicles can be lifted from one scenario and placed into another with the appropriate lighting and movement patterns. Roads can be recreated with varying levels of traffic. Multiple viewing angles of every scene offer more accurate viewpoints during lane changes and turns. Furthermore, advanced image processing technology enables smooth transitions and reduces distortion compared with other video simulation methods. The same image processing methods are used to extract trajectories and thus to model driver behaviors.
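The "separate elements" idea can be sketched as a scene made of independent components that are recombined at will. The data structures and names below are illustrative only; the real AADS pipeline operates on images and LIDAR point clouds, not labeled records.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a street scene stored as a background plus a list of
# independent agents, so agents can be lifted from one scene into another.

@dataclass
class Agent:
    kind: str          # e.g. "car", "pedestrian", "bicycle"
    trajectory: list   # recorded (x, y) waypoints

@dataclass
class Scene:
    background: str            # stand-in for a rendered road segment
    agents: list = field(default_factory=list)

def lift_and_place(source: Scene, target: Scene, kind: str) -> Scene:
    """Copy all agents of a given kind from a source scene into a target scene."""
    moved = [a for a in source.agents if a.kind == kind]
    return Scene(target.background, target.agents + moved)

rush_hour = Scene("downtown", [Agent("car", [(0, 0)]), Agent("pedestrian", [(5, 2)])])
quiet_road = Scene("suburb", [])
augmented = lift_and_place(rush_hour, quiet_road, "pedestrian")
print([a.kind for a in augmented.agents])   # the pedestrian now appears in the suburb scene
```

Because each element carries its own trajectory, the composed scene inherits realistic movement patterns rather than hand-scripted ones, which is what lets one recording seed many distinct test scenarios.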

"Because we're using real-world video and real-world movements, our perception module has more accurate information than previous methods. And then, because of the realism of the simulator, we can better evaluate the navigation strategies of an autonomous driving system."

Dinesh Manocha, Paper’s Co-Author and Professor of Computer Science, Electrical and Computer Engineering, Institute for Advanced Computer Studies, University of Maryland.

Manocha said that by publishing this research, the team hopes companies developing self-driving vehicles will apply the same data-driven approach to improve their own simulators for testing and evaluating autonomous driving systems.

Augmented Autonomous Driving Simulation (AADS)

VIDEO: Demonstration of the AADS system
