Researchers Train Self-Driving Cars to Recognize and Predict Pedestrian Movement

By pinpointing humans’ gait, foot placement, and body symmetry, scientists at the University of Michigan (U-M) are training self-driving cars to identify and predict pedestrian movements with more precision than existing technologies.

Predicting pedestrian movement in 3D for driverless cars

Data gathered by vehicles through LiDAR, cameras, and GPS allows the scientists to capture video clips of humans in motion and then reconstruct them in 3D computer simulation. Using that, they have developed a “biomechanically inspired recurrent neural network” that catalogs human movements.
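
The article does not detail the network’s design, but a minimal sketch of a recurrent pose predictor, written here in PyTorch, conveys the general idea. The joint count, layer sizes, and class names below are illustrative assumptions, not the authors’ architecture.

```python
# Minimal sketch (not the authors' architecture): a recurrent network that
# consumes a sequence of 3D joint positions and predicts the pose at each
# next time step. Joint count and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumed skeleton size; varies by pose dataset

class PosePredictorRNN(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        input_size = NUM_JOINTS * 3  # (x, y, z) per joint
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.decoder = nn.Linear(hidden_size, input_size)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (batch, time, NUM_JOINTS * 3), observed 3D pose sequence
        hidden_states, _ = self.encoder(poses)
        return self.decoder(hidden_states)  # next-step pose per time step

model = PosePredictorRNN()
observed = torch.randn(8, 30, NUM_JOINTS * 3)  # 8 clips, 30 frames each
predicted = model(observed)                    # same shape as the input
```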

With that information, they can predict poses and future positions for one or several pedestrians up to approximately 50 yards from the vehicle, roughly the scale of a city intersection.

Prior work in this area has typically only looked at still images. It wasn’t really concerned with how people move in three dimensions. But if these vehicles are going to operate and interact in the real world, we need to make sure our predictions of where a pedestrian is going don’t coincide with where the vehicle is going next.

Ram Vasudevan, Assistant Professor of Mechanical Engineering, U-M.

Equipping vehicles with the necessary predictive power requires the network to capture the intricacies of human movement: the mirror symmetry of limbs, the periodicity of a human’s gait, and the way foot placement affects stability while walking.
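
The article does not spell out how these priors enter the model. One plausible way, sketched below under assumed joint indices, is to add loss terms that penalize asymmetric limbs and aperiodic gaits during training.

```python
# Hypothetical loss terms encoding biomechanical priors; the article does
# not give the authors' formulation, so the joint indices and the form of
# each term here are assumptions for illustration only.
import torch

# Assumed (left, right) joint index pairs for mirrored limbs.
MIRROR_PAIRS = [(5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16)]

def symmetry_loss(poses: torch.Tensor) -> torch.Tensor:
    # poses: (batch, time, joints, 3). Penalize left/right joints whose
    # distances from an assumed root joint (index 0) differ.
    root = poses[:, :, 0:1, :]
    dist = torch.linalg.norm(poses - root, dim=-1)  # (batch, time, joints)
    left = dist[:, :, [l for l, _ in MIRROR_PAIRS]]
    right = dist[:, :, [r for _, r in MIRROR_PAIRS]]
    return (left - right).abs().mean()

def periodicity_loss(poses: torch.Tensor, stride_frames: int) -> torch.Tensor:
    # A walking gait roughly repeats once per stride; penalize differences
    # between poses one assumed stride period apart.
    return (poses[:, stride_frames:] - poses[:, :-stride_frames]).abs().mean()
```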

A lot of the machine learning used to advance autonomous technology to its present level has dealt with 2D images—still photos. A computer that is shown numerous photos of a stop sign will ultimately learn to recognize stop signs in the real world and in real time.
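
For contrast, a still-image classifier of the kind described above can be written in a few lines; this toy network is purely illustrative and is not the U-M system.

```python
# A toy 2D still-image classifier (stop sign vs. not); purely
# illustrative, not the U-M system or any published model.
import torch
import torch.nn as nn

stop_sign_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: stop sign / not a stop sign
)
logits = stop_sign_classifier(torch.randn(1, 3, 64, 64))  # one RGB image
```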

But by using video clips that run for several seconds, the U-M system can examine the first half of the clip to make its predictions, and then verify their accuracy against the second half.
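
That predict-on-the-first-half, verify-on-the-second-half protocol is easy to express in code. In the sketch below, `model` is a hypothetical stand-in that maps an observed pose sequence to a predicted future sequence; it is not the authors’ API.

```python
# Sketch of the half-clip evaluation described above. `model` is assumed
# to map an observed pose sequence to a predicted future sequence.
import torch

def evaluate_clip(model, clip: torch.Tensor) -> torch.Tensor:
    # clip: (time, joints, 3), a full recorded pose sequence
    half = clip.shape[0] // 2
    observed, ground_truth = clip[:half], clip[half:]
    predicted = model(observed.unsqueeze(0)).squeeze(0)
    horizon = min(predicted.shape[0], ground_truth.shape[0])
    # Mean per-joint Euclidean error over the verification half.
    return torch.linalg.norm(
        predicted[:horizon] - ground_truth[:horizon], dim=-1
    ).mean()
```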

Now, we’re training the system to recognize motion and make predictions of not just one single thing—whether it’s a stop sign or not—but where that pedestrian’s body will be at the next step and the next and the next.

Matthew Johnson-Roberson, Associate Professor, Department of Naval Architecture and Marine Engineering, U-M.

To illustrate the kind of inferences the neural network can make, Vasudevan describes a common sight.

If a pedestrian is playing with their phone, you know they’re distracted. Their pose and where they’re looking are telling you a lot about their level of attentiveness. They’re also telling you a lot about what that person is capable of doing next.

Ram Vasudevan, Assistant Professor of Mechanical Engineering, U-M.

The results indicate that the new system improves a driverless vehicle’s capacity to anticipate what is most likely to happen next.

The median translation error of our prediction was approximately 10 cm after one second and less than 80 cm after six seconds. All other comparison methods were up to 7 meters off. We’re better at figuring out where a person is going to be.

Matthew Johnson-Roberson, Associate Professor, Department of Naval Architecture and Marine Engineering, U-M.
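
The exact evaluation protocol is defined in the paper; one plausible reading of “median translation error” at fixed one-second and six-second horizons, assuming ground-plane tracks sampled at a known frame rate, looks like this:

```python
# One plausible reading of "median translation error" at fixed horizons;
# the frame rate and indexing convention are assumptions, not figures
# taken from the paper.
import torch

def median_translation_error(pred_xy: torch.Tensor,
                             true_xy: torch.Tensor,
                             fps: int = 10) -> dict:
    # pred_xy, true_xy: (clips, time, 2), predicted vs. actual positions
    errors = torch.linalg.norm(pred_xy - true_xy, dim=-1)  # (clips, time)
    return {f"{s}s": errors[:, s * fps - 1].median().item()
            for s in (1, 6) if s * fps <= errors.shape[1]}
```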

To narrow the options for predicting the next movement, the scientists applied the physical limits of the human body, such as a person’s fastest possible speed on foot and the inability to fly.
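
A simple way to impose such limits is to clamp each predicted step to a maximum plausible human speed, as sketched below. The 10 m/s ceiling (near a sprinter’s pace) is an assumption, not a figure from the paper.

```python
# Clamp each predicted step so the implied speed never exceeds a maximum
# plausible human running speed; the threshold is an assumption.
import torch

MAX_SPEED_M_S = 10.0  # assumed ceiling, near a sprinter's pace

def enforce_speed_limit(track: torch.Tensor, fps: int = 10) -> torch.Tensor:
    # track: (time, 2), one pedestrian's predicted ground-plane positions
    max_step = MAX_SPEED_M_S / fps
    out = track.clone()
    for t in range(1, out.shape[0]):
        step = out[t] - out[t - 1]
        norm = torch.linalg.norm(step)
        if norm > max_step:
            out[t] = out[t - 1] + step * (max_step / norm)  # clamp the step
    return out
```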

To develop the dataset used to train U-M’s neural network, scientists parked a vehicle with Level 4 autonomous features at a number of Ann Arbor intersections. With the car’s cameras and LiDAR focusing on the intersection, the vehicle could record several days of data at a time.

Scientists bolstered that real-world, “in the wild” data with traditional pose data sets captured in a lab. The outcome is a system that will raise the bar for what driverless vehicles can achieve.

We are open to diverse applications and exciting interdisciplinary collaboration opportunities, and we hope to create and contribute to a safer, healthier, and more efficient living environment.

Xiaoxiao Du, Research Engineer, U-M.

A paper on the research has been published online as an early-access article in IEEE Robotics and Automation Letters and will appear in an upcoming print edition. The research was supported by a grant from Ford Motor Company.
