Human-Like Awareness Emerging in Self-Driving Vehicles

How mobile robots can accurately sense and comprehend their surroundings even when other objects block parts of the scene is a critical issue that must be resolved before self-driving vehicles can operate safely in busy cities.

In contrast to panoptic segmentation (middle), amodal panoptic segmentation (bottom) predicts entire object instances of the input image (top), including their occluded regions, e.g., cars and people. Image Credit: Berkeley DeepDrive; Abhinav Valada

Current artificial intelligence (AI) algorithms that allow robots and self-driving vehicles to sense their environment cannot envisage the complete physical structure of a partially obscured object the way people can.

Once they have learned how their environment appears, AI-powered robots can navigate and find their way around on their own. However, it has proven challenging for them to perceive the full structure of objects that are partially covered, such as individuals in crowds or vehicles in traffic.

Prof. Dr. Abhinav Valada and Ph.D. candidate Rohit Mohan from the Robot Learning Lab at the University of Freiburg have made a significant advancement in finding a solution to this issue, which they have documented in two joint articles.

A Task Whose Solution Promises More Safety

The two Freiburg researchers formulated the amodal panoptic segmentation task and used novel AI techniques to demonstrate its feasibility. Until now, self-driving vehicles have relied on panoptic segmentation to comprehend their surroundings.

Self-driving vehicles can currently only recognize instances of certain objects, such as a person or a car, and predict which pixels in an image belong to the “visible” regions of those objects.

These vehicles cannot estimate an object’s whole shape when another adjacent object partially obscures it. The new amodal panoptic segmentation task makes this comprehensive perception of the environment feasible.

“Amodal” means that any partial occlusion of objects must be abstracted away: instead of perceiving fragments, the system should perceive objects as a whole. This enhanced visual recognition capability will significantly advance the safety of self-driving vehicles.

Potential to Revolutionize Urban Visual Scene Understanding

The researchers added the new task to well-known benchmark datasets and made them publicly available in a recent article presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Scientists are now invited to benchmark their own AI algorithms on the task. Its objective is the pixel-wise semantic segmentation of the visible regions of amorphous “stuff” classes, such as roads, vegetation, and sky, as well as the instance segmentation of both the visible and occluded regions of countable “thing” classes, including cars, trucks, and pedestrians.

The benchmark and datasets, together with two newly proposed learning algorithms, are publicly accessible on the website.

We are confident that novel AI algorithms for this task will enable robots to emulate the visual experience that humans have by perceiving complete physical structures of objects.

Prof. Dr. Abhinav Valada, Robot Learning Lab, Faculty of Engineering, University of Freiburg

Dr. Valada added, “Amodal panoptic segmentation will significantly help downstream automated driving tasks where occlusion is a major challenge such as depth estimation, optical flow, object tracking, pose estimation, motion prediction, etc.”

With more advanced AI algorithms for this task, the visual recognition ability of self-driving cars can be revolutionized. For example, if the entire structure of road users is perceived at all times, regardless of partial occlusions, the risk of accidents can be significantly reduced.

By inferring the relative depth ordering of objects in a scene, automated vehicles can also make complex decisions, such as which direction to move toward an object to acquire a more precise view. The challenge and the advantages of making these goals a reality were presented to top automotive industry executives at AutoSens, held at the Autoworld Museum in Brussels.

Journal References:

Mohan, R., et al. (2022) Perceiving the Invisible: Proposal-Free Amodal Panoptic Segmentation. IEEE Robotics and Automation Letters. doi:10.1109/LRA.2022.3189425

Mohan, R., et al. (2022) Amodal Panoptic Segmentation. IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR).
