
New Method Could Improve Detection Accuracy of Autonomous Cars

Quickly detecting pedestrians and other vehicles sharing the road is crucial for self-driving cars. Scientists at Carnegie Mellon University have demonstrated that detection accuracy can be considerably improved by also helping the vehicle recognize what it does not see, i.e., empty space.

New CMU research shows that what a self-driving car doesn’t see (in green) is as important to navigation as what it actually sees (in red). Image Credit: Carnegie Mellon University.

People know very well that nearby objects can block their view of things lying further ahead. However, according to Peiyun Hu, a PhD student at CMU's Robotics Institute, self-driving cars do not usually reason about nearby objects in this way.

Instead, they use 3D data from lidar to represent objects as a point cloud and then attempt to match those point clouds against a library of 3D representations of objects. The issue, according to Hu, is that the 3D data from the vehicle's lidar is not actually 3D: the sensor cannot see the occluded parts of an object, and existing algorithms do not reason about those occlusions.

Perception systems need to know their unknowns.

Peiyun Hu, PhD Student, Robotics Institute, Carnegie Mellon University

Hu's study allows the perception system of a self-driving car to account for visibility as it reasons about what its sensors observe. In fact, companies already use visibility reasoning when building digital maps.

Map-building fundamentally reasons about what’s empty space and what’s occupied. But that doesn’t always occur for live, on-the-fly processing of obstacles moving at traffic speeds.

Deva Ramanan, Associate Professor of Robotics, Director of Argo AI Center for Autonomous Vehicle Research, Carnegie Mellon University
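The map-building idea Ramanan describes can be illustrated with a minimal sketch: step along each lidar ray from the sensor origin and mark every cell before the return as observed free space, the return itself as occupied, and everything else as unknown. This is not the CMU team's algorithm, just a toy 2D occupancy-grid raycasting example; the grid size, step length, and function name are illustrative assumptions.

```python
import numpy as np

def visibility_grid(origin, hits, size=20, step=0.5):
    """Classify grid cells as free (0), occupied (1), or unknown (-1)
    by walking along each lidar ray from the sensor origin to its return.
    Toy 2D illustration of map-style visibility reasoning, not the CMU method."""
    grid = -np.ones((size, size), dtype=int)  # every cell starts out unknown
    ox, oy = origin
    for hx, hy in hits:
        dist = np.hypot(hx - ox, hy - oy)
        n = max(int(dist / step), 1)
        # cells along the ray, before the hit, were seen through -> free space
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            cx = int(round(ox + t * (hx - ox)))
            cy = int(round(oy + t * (hy - oy)))
            if 0 <= cx < size and 0 <= cy < size:
                grid[cy, cx] = 0
        # the lidar return itself marks an occupied cell
        hx_i, hy_i = int(round(hx)), int(round(hy))
        if 0 <= hx_i < size and 0 <= hy_i < size:
            grid[hy_i, hx_i] = 1
    return grid
```

Cells the rays never cross stay unknown, which is exactly the distinction a detector can exploit: an object hypothesis sitting in observed free space is implausible, while one in unknown (occluded) space cannot be ruled out.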

In the study, which will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, held online June 13th–19th, Hu and his collaborators use map-making methods to let the system reason about visibility while identifying objects.

When tested against an established benchmark, the CMU technique outperformed the previous top-performing method, improving detection by 5.3% for pedestrians, 7.4% for trucks, 10.7% for cars, 16.7% for trailers, and 18.4% for buses.

One reason earlier systems may have ignored visibility is the extra computation it requires. However, Hu said his team did not find that to be a problem: their technique takes just 24 ms to run. (For comparison, each lidar sweep takes 100 ms.)

Besides Hu and Ramanan, the team included Jason Ziglar from Argo AI and David Held, assistant professor of robotics. This study was financially supported by the Argo AI Center.
