
New Study to Examine Aspects of Norm Violation Response in Human-Robot Teams

Elizabeth Phillips, Assistant Professor of Psychology (Human Factors/Applied Cognition), is conducting a study to examine two aspects of norm violation response in human-robot teams.

Specifically, she is investigating (1) context-sensitive tradeoffs between rule-based and role-based responses, and (2) the representations and mechanisms needed to facilitate role-based responses.
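
To make that distinction concrete, here is a purely illustrative toy sketch of how a system might choose between a rule-based reply (citing the violated norm) and a role-based reply (appealing to the violator's role) depending on context. Every name, field, threshold, and phrasing below is hypothetical and is not drawn from Phillips's study.

```python
# Toy sketch only: a context-sensitive choice between a rule-based and a
# role-based response to a norm violation. All details are hypothetical.
from dataclasses import dataclass


@dataclass
class Violation:
    norm: str           # e.g., "do not interrupt a patient"
    violator_role: str  # e.g., "nurse"
    severity: float     # 0.0 (minor) to 1.0 (severe)


def respond(violation: Violation, formality: float) -> str:
    """Pick a response style from the violation and how formal the context is."""
    if formality > 0.5 or violation.severity > 0.7:
        # Rule-based: point directly at the violated norm.
        return f"That violates the norm: {violation.norm}."
    # Role-based: appeal to expectations attached to the violator's role.
    return f"As a {violation.violator_role}, you would usually be expected to avoid that."


print(respond(Violation("do not interrupt a patient", "nurse", 0.3), formality=0.2))
```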

She is doing so in four steps:

  • First, she is identifying metrics to assess responses.
  • Second, she is investigating the tradeoffs between role- and rule-based responses.
  • Third, she is modeling the generation of the role-based responses and selection between role-based and rule-based responses.
  • Finally, she is validating these models and using them to articulate new moral-philosophical arguments.

This study is important because there is a clear societal benefit in understanding the ethical implications of natural language generation (NLG) algorithms. NLG is the software process of turning structured data into natural language.

This work is also important for understanding how to design such algorithms to optimize ethical benefits.
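
For readers unfamiliar with NLG, the sketch below shows a minimal template-based version of the idea: a structured record goes in and a sentence comes out. The record fields and the template are hypothetical examples, not material from the study.

```python
# Minimal sketch of template-based NLG: structured data in, a sentence out.
# The fields and template below are hypothetical illustrations.
def generate_sentence(record: dict) -> str:
    """Turn a structured record into a natural-language sentence."""
    template = "{robot} reported that the task '{task}' finished with status {status}."
    return template.format(**record)


data = {"robot": "Unit 7", "task": "deliver medication", "status": "complete"}
print(generate_sentence(data))
# -> Unit 7 reported that the task 'deliver medication' finished with status complete.
```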

Moreover, robots with both linguistic interaction and moral reasoning capabilities have the potential for significant societal impact in many different areas, such as eldercare and education.

This project will also have multidisciplinary research impacts on fields such as artificial intelligence, natural language processing, robotics, and machine ethics. It will be valuable to researchers in moral philosophy and moral psychology, as well.

Phillips received $171,917 from the National Science Foundation for this project. Funding began in August 2020 and will end in late September 2022.

Source: https://www2.gmu.edu/
