
Enhancing Agent Transparency to Improve Collaboration Between Humans and Artificially Intelligent Agents

Researchers at the U.S. Army Research Laboratory have developed methods to enhance collaboration between humans and artificially intelligent agents in two recently concluded projects under the Autonomy Research Pilot Initiative, supported by the Office of the Secretary of Defense.

The researchers achieved this by improving agent transparency, that is, the ability of a robot, unmanned vehicle, or software agent to communicate its intent, performance, reasoning process, and future plans to humans.

Photo credit: Dr. Jessie Y. Chen.

"As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, the reasoning process behind those behaviors, and the expected outcomes so the humans can properly calibrate their trust in the systems and make appropriate decisions," said Dr. Jessie Chen, a senior research psychologist at ARL.

In a 2016 report, the U.S. Defense Science Board identified six barriers to human trust in autonomous systems, with "low observability, predictability, directability and auditability" and "low mutual understanding of common goals" among the main issues.

To address these issues, Chen and her team developed the Situation awareness-based Agent Transparency (SAT) model and evaluated its effect on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model describes the information an agent should convey to its human collaborator so that the human can maintain effective situation awareness of the agent in its tasking environment:

- At SAT level 1, the agent provides the operator with basic information about its current state, goals, intentions, and plans.
- At SAT level 2, the agent reveals its reasoning process and the affordances and constraints it considers when planning its actions.
- At SAT level 3, the agent provides information about its projection of future states, predicted consequences, likelihood of success or failure, and any uncertainty associated with those projections.
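To make the three levels concrete, here is a minimal sketch in Python of how an agent might package SAT information for presentation to an operator. The class and field names are hypothetical illustrations of the model's structure, not ARL's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical illustration of the three SAT levels; all names are
# invented for this sketch and do not come from ARL's software.

@dataclass
class SATLevel1:
    """Level 1: current state, goals, intentions, and plans."""
    current_state: str
    goal: str
    plan: List[str]

@dataclass
class SATLevel2:
    """Level 2: reasoning behind the plan, plus the affordances and
    constraints the agent considered while planning."""
    rationale: str
    constraints: List[str]

@dataclass
class SATLevel3:
    """Level 3: projections of future states, predicted outcomes,
    likelihood of success, and associated uncertainty."""
    predicted_outcome: str
    success_probability: float           # e.g., 0.82
    uncertainty: Optional[str] = None    # caveats attached to the projection

@dataclass
class TransparencyReport:
    """What the agent communicates to the operator, organized by SAT level.
    Higher levels are optional: they are revealed only at higher
    transparency settings."""
    level1: SATLevel1
    level2: Optional[SATLevel2] = None
    level3: Optional[SATLevel3] = None
```

In such a scheme, an experimenter could vary the transparency condition simply by choosing how many levels of the report the interface renders, which mirrors how the studies below manipulate transparency level.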

In one of the ARPI efforts, IMPACT, a research program on human-agent teaming for the control of multiple heterogeneous unmanned vehicles, ARL's experimental work focused on how agent transparency levels, based on the SAT model, affect human operators' decision making in military scenarios. The results of a series of human factors experiments collectively indicate that agent transparency benefits the human's decision making and, in turn, overall human-agent team performance. In particular, the researchers reported that when the agent's transparency level was higher, the human's trust in the agent was significantly better calibrated: operators accepted the agent's plan when it was correct and rejected it when it was not.

The other agent transparency study Chen and her team carried out under the ARPI was Autonomous Squad Member (ASM), for which ARL partnered with researchers at the Naval Research Laboratory. The ASM is a small ground robot that interacts and communicates with an infantry squad. Within the overall ASM program, Chen's team developed transparency visualization concepts, which they used to examine how agent transparency levels affect operator performance. Informed by the SAT model, the ASM's user interface features an at-a-glance transparency module in which user-tested iconographic representations of the agent's motivations, plans, and predicted outcomes support transparent interaction with the agent. A series of human factors studies of the ASM user interface examined how agent transparency affects the human teammate's trust in the ASM, situation awareness, and workload. The results, consistent with those of the IMPACT project, showed positive effects of agent transparency on the human's task performance without an increase in perceived workload. Study participants also reported that they found the ASM more trustworthy, intelligent, and human-like when it operated at higher levels of transparency.

Chen and her team are now extending the SAT model to bidirectional transparency between the human and the agent.

"Bidirectional transparency, although conceptually straightforward (human and agent being mutually transparent about their reasoning processes), can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent's planning and performance, just as agent transparency can support the human's situation awareness and task performance, which we have demonstrated in our studies," said Chen.

The challenge is to design user interfaces that incorporate visual, auditory, and other modalities that can dynamically support bidirectional transparency in real time without overloading the human with information.
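As a rough illustration of that design problem, the sketch below shows one hypothetical way an agent could throttle how much it reports based on the human's self-reported workload, so that transparency does not become overload. None of the names, fields, or thresholds here come from ARL's work; this is a sketch under invented assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of one bidirectional transparency exchange:
# both parties share state and reasoning each decision cycle.
# All names and thresholds are illustrative only.

@dataclass
class HumanStatus:
    current_task: str
    workload_estimate: float  # 0.0 (idle) to 1.0 (saturated)
    intent: str

@dataclass
class AgentStatus:
    plan_summary: str       # SAT level 1
    rationale: str          # SAT level 2
    predicted_outcome: str  # SAT level 3

def exchange(human: HumanStatus, agent: AgentStatus) -> None:
    """Run one transparency exchange per decision cycle."""
    if human.workload_estimate > 0.8:
        # High workload: surface only the plan (SAT level 1).
        print(f"Agent -> Human: {agent.plan_summary}")
    else:
        # Lower workload: include reasoning and projections (levels 2-3).
        print(f"Agent -> Human: {agent.plan_summary} | "
              f"{agent.rationale} | {agent.predicted_outcome}")
    # Human -> Agent: the agent can fold the human's state and intent
    # into its replanning, the other direction of transparency.
    print(f"Human -> Agent: task={human.current_task}, intent={human.intent}")

# Example exchange:
exchange(
    HumanStatus("route clearance", 0.9, "reach checkpoint B"),
    AgentStatus("take northern route", "southern route blocked",
                "arrival in 12 min, ~85% confidence"),
)
```

The point of the sketch is the adaptation step: in a bidirectional scheme, the human's reported state shapes what the agent communicates, rather than the agent broadcasting every detail at all times.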

The U.S. Department of Defense Autonomy Research Pilot Initiative (Program Manager Dr. DH Kim) supported this study.
