Functional Near-Infrared Spectroscopy Used to Track Human-Robot Work Interactions

As industries begin to envision humans working in close proximity to robots, there is a need to ensure that the relationship is smooth, effective, and beneficial to humans. Robot reliability and humans’ willingness to trust robot behavior are central to this working relationship.

Researchers in Dr. Ranjana Mehta’s lab capture functional brain activity as operators work with robots on a manufacturing task to track the operator’s trust or distrust levels. Image Credit: Texas A&M Engineering.

However, human trust levels are difficult to gauge because of their subjectivity, a challenge that researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to address.

Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s research on human-autonomy trust grew out of a series of projects on human-robot interactions in safety-critical work domains supported by the National Science Foundation (NSF).

While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study. We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.

Dr. Ranjana Mehta, Associate Professor and Director, NeuroErgonomics Lab, Texas A&M University

Mehta’s most recent NSF-sponsored work, published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships behind how and why an operator’s trusting behaviors are influenced by both robot and human factors.

In another article, published in the journal Applied Ergonomics, Mehta and her team investigated these robot and human factors.

Mehta’s lab used functional near-infrared spectroscopy to capture functional brain activity as operators worked alongside robots on a manufacturing task. They found that faulty robot actions reduced the operators’ trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor, and visual cortices, indicating greater workload and heightened situational awareness.
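For readers unfamiliar with the measurement itself, the sketch below illustrates, in general terms, how fNIRS recordings are typically converted into the hemodynamic signals that underlie such activation analyses: light-intensity changes at two near-infrared wavelengths are transformed into oxy- and deoxyhemoglobin concentration changes via the modified Beer-Lambert law. This is a generic, minimal example with synthetic data and placeholder coefficients; it is not the processing pipeline or parameters used in the Texas A&M study.

# Minimal illustrative sketch of standard fNIRS preprocessing (modified Beer-Lambert law).
# All numbers below (extinction coefficients, geometry, signals) are placeholders.
import numpy as np

def mbll(intensity, baseline, ext_coeffs, separation_cm, dpf):
    """Convert light intensities at two wavelengths into delta-HbO / delta-HbR.

    intensity, baseline : arrays of shape (2, n_samples), one row per wavelength
    ext_coeffs          : 2x2 matrix [[eHbO_l1, eHbR_l1], [eHbO_l2, eHbR_l2]]
    separation_cm, dpf  : source-detector distance and differential pathlength factors
    """
    delta_od = -np.log10(intensity / baseline)           # optical density change
    path = separation_cm * np.asarray(dpf)[:, None]      # effective optical path length
    # Solve the 2x2 system at every sample: delta_od = E @ [dHbO, dHbR] * path
    return np.linalg.solve(ext_coeffs, delta_od / path)  # rows: dHbO, dHbR

# Synthetic example: two wavelengths, 100 samples of small intensity fluctuations
rng = np.random.default_rng(0)
baseline = np.ones((2, 100))
intensity = baseline * (1 + 0.01 * rng.standard_normal((2, 100)))
ext = np.array([[0.15, 0.39],   # illustrative extinction coefficients, 1/(mM*cm)
                [0.29, 0.18]])
dhb = mbll(intensity, baseline, ext, separation_cm=3.0, dpf=[6.0, 6.0])
print(dhb.shape)  # (2, 100): delta-HbO and delta-HbR time courses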

Interestingly, the same distrusting behavior was associated with a decoupling of these brain regions, which otherwise worked well in unison when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that the neural signatures of trust are influenced by the dynamics of human-autonomy collaboration.
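To make the notion of coupling and decoupling concrete, one common way to quantify it is functional connectivity: the correlation between region-averaged hemodynamic time courses, computed separately for reliable and faulty robot conditions. The sketch below uses synthetic signals and generic region labels purely for illustration; it is not the connectivity analysis reported in the papers.

# Illustrative sketch only: inter-region "coupling" as pairwise correlation of
# region-averaged delta-HbO time series, compared across hypothetical conditions.
import numpy as np

def region_coupling(signals):
    """Pairwise Pearson correlation between region time series.

    signals : dict mapping region name -> 1-D array of equal length
    Returns a (regions x regions) correlation matrix and the region order.
    """
    names = list(signals)
    data = np.vstack([signals[n] for n in names])
    return np.corrcoef(data), names

rng = np.random.default_rng(1)
t = np.arange(600)               # e.g. 600 samples of region-averaged delta-HbO
shared = np.sin(t / 20.0)        # a common slow component shared across regions

# Hypothetical "reliable robot" segment: regions share the common component (coupled)
reliable = {r: shared + 0.3 * rng.standard_normal(t.size)
            for r in ("frontal", "motor", "visual")}
# Hypothetical "faulty robot" segment: mostly independent activity (decoupled)
faulty = {r: 0.2 * shared + rng.standard_normal(t.size)
          for r in ("frontal", "motor", "visual")}

for label, segment in (("reliable", reliable), ("faulty", faulty)):
    corr, names = region_coupling(segment)
    off_diag = corr[np.triu_indices_from(corr, k=1)]
    print(label, "mean inter-region correlation:", round(off_diag.mean(), 2))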

What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot.

Dr. Ranjana Mehta, Associate Professor and Director, NeuroErgonomics Lab, Texas A&M University

“This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations since perceptions of trust alone are not indicative of how operators’ trusting behaviors shape up,” Dr. Mehta added.

Dr. Sarah Hopko ’19, lead author on both articles and a recent industrial engineering doctoral student, stated that neural responses and perceptions of trust are both indicators of trusting and distrusting behaviors and convey distinct information on how trust builds, breaks, and repairs with different robot behaviors.

She highlighted the strengths of multimodal trust metrics, such as eye tracking, neural activity, and behavioral analysis, which can reveal insights that subjective responses alone cannot provide.

The next step is to extend the research to a different work context, such as emergency response, and to understand how trust in multi-human-robot teams affects teamwork and taskwork in safety-critical environments.

Mehta explained that the long-term goal is not to replace humans with autonomous robots but to support them by designing trust-aware autonomous agents.

This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation, and integration into the workplace are supportive and empowering of human capabilities.

Dr. Ranjana Mehta, Associate Professor and Director, NeuroErgonomics Lab, Texas A&M University

Journal References:

Hopko, S. K., et al. (2022) Trust in Shared-Space Collaborative Robots: Shedding Light on the Human Brain. Human Factors: The Journal of the Human Factors and Ergonomics Society. doi.org/10.1177/00187208221109039.

Hopko, S. K., et al. (2022) Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors. Applied Ergonomics. doi.org/10.1016/j.apergo.2022.103863.

Source: https://engineering.tamu.edu/
