
New System Allows AI in Robotic Prostheses to Predict Terrain Type

New software developed by researchers can be combined with existing hardware to allow people who use robotic exoskeletons or prosthetics to walk more safely and naturally on a variety of terrains.

Imaging devices and environmental context. (a) On-glasses camera configuration using a Tobii Pro Glasses 2 eye tracker. (b) Lower limb data acquisition device with a camera and an IMU chip. (c) and (d) Example frames from the cameras for the two data acquisition configurations. (e) and (f) Example images of the data collection environment and terrains considered in the experiments. Image Credit: North Carolina State University.

The new system integrates computer vision into prosthetic leg control, and involves powerful artificial intelligence (AI) algorithms that enable the software to better account for uncertainty.

Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on. The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.

Edgar Lobaton, Study Co-Author and Associate Professor of Electrical and Computer Engineering, North Carolina State University

The team focused on distinguishing between six terrains that require adjustments to a robotic prosthetic’s behavior: grass, concrete, brick, tile, “upstairs,” and “downstairs.”

If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision—it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode.

Boxuan Zhong, Study Lead Author and PhD Graduate, North Carolina State University
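As a rough illustration of that idea (a hypothetical sketch, not the authors’ published control code), uncertainty-gated behavior selection could be written as follows. The six terrain labels match those studied; the confidence threshold, function name, and example probabilities are assumptions made for illustration:

# Hypothetical sketch of uncertainty-gated terrain handling. The class names
# match the six terrains in the study; the threshold value is an assumption.
TERRAINS = ["grass", "concrete", "brick", "tile", "upstairs", "downstairs"]
CONFIDENCE_THRESHOLD = 0.85  # illustrative value, not taken from the paper

def choose_behavior(class_probs):
    """Map per-terrain probabilities to a behavior, or fall back to safe mode."""
    best_idx = max(range(len(class_probs)), key=lambda i: class_probs[i])
    if class_probs[best_idx] < CONFIDENCE_THRESHOLD:
        # Prediction is too uncertain: do not force a questionable decision.
        return "safe_mode"
    return TERRAINS[best_idx]

# Example: a prediction that is confident the next steps are on concrete.
print(choose_behavior([0.02, 0.90, 0.03, 0.02, 0.02, 0.01]))  # prints "concrete"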

The new “environmental context” framework combines software and hardware elements. The team designed it for use with any type of robotic prosthetic device or lower-limb robotic exoskeleton, with one additional piece of hardware: a camera.

As part of the study, the team used cameras mounted on the lower-limb prosthesis as well as cameras worn on eyeglasses. The researchers assessed how well the AI could use computer vision data from the two types of cameras, both separately and in combination.

Incorporating computer vision into control software for wearable robotics is an exciting new area of research. We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive.

Helen Huang, Study Co-Author, North Carolina State University

Huang added, “However, we also found that using only the camera mounted on the lower limb worked pretty well—particularly for near-term predictions, such as what the terrain would be like for the next step or two.”
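One simple way to combine evidence from the eyeglass and lower-limb cameras, shown here only as an illustrative sketch and not as the fusion method used in the study, is to take a weighted average of each view’s per-terrain probabilities:

import numpy as np

def fuse_predictions(glasses_probs, limb_probs, glasses_weight=0.5):
    """Blend per-terrain probabilities from the two cameras.

    The weight is a hypothetical parameter, not a value reported in the
    study, and other fusion rules are possible.
    """
    glasses_probs = np.asarray(glasses_probs, dtype=float)
    limb_probs = np.asarray(limb_probs, dtype=float)
    fused = glasses_weight * glasses_probs + (1.0 - glasses_weight) * limb_probs
    return fused / fused.sum()  # renormalize to a valid probability distribution

Setting glasses_weight to zero in this sketch would correspond to the lower-limb-camera-only configuration that Huang describes as working well for near-term predictions.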

Huang is the Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill.

However, the most important advance is the AI itself.

Lobaton noted, “We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making. This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”
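A common way to obtain that kind of uncertainty estimate from a deep network, offered here only as a generic sketch and not as the specific method developed in the paper, is Monte Carlo dropout: running several stochastic forward passes and measuring how much the predictions spread.

import torch
import torch.nn.functional as F

def mc_dropout_predict(model, image, n_samples=20):
    """Estimate class probabilities and an uncertainty score via MC dropout.

    Keeping dropout active at inference time yields a distribution of
    predictions; the entropy of their mean is a simple uncertainty measure.
    """
    model.train()  # keeps dropout layers stochastic during inference
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(image), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy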

To train the AI system, the team had able-bodied individuals wear the cameras while walking through a range of indoor and outdoor settings. The researchers then carried out a proof-of-principle assessment in which a person with a lower-limb amputation wore the cameras while walking through the same settings.

Lobaton added, “We found that the model can be appropriately transferred so the system can operate with subjects from different populations. That means that the AI worked well even though it was trained by one group of people and used by somebody different.”

However, the new framework has not yet been tested in a robotic device.

According to Huang, “We are excited to incorporate the framework into the control system for working robotic prosthetics—that’s the next step.”

“And we’re also planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing,” stated Zhong.

The paper titled “Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification” has been published in IEEE Transactions on Automation Science and Engineering. Rafael da Silva, a PhD student at NC State, and Minhan Li, a PhD student in the Joint Department of Biomedical Engineering, co-authored the paper.

This study received financial support from the National Science Foundation under grants 1552828, 1563454, and 1926998.

Journal Reference:

Zhong, B., et al. (2020) Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification. IEEE Transactions on Automation Science and Engineering. doi.org/10.1109/TASE.2020.2993399.

Source: https://www.ncsu.edu/
