
Review Examines How People Develop Trust in AI Technologies

Artificial intelligence (AI) can be described as technology that interacts with its environment and mimics human intelligence. Such technologies have the potential to considerably redefine the way people work.


Effective integration of AI technologies into organizations depends on the degree to which workers trust the technology. A recent review examined 20 years of research on how individuals develop trust in AI technologies.

The study’s authors concluded that both an AI’s capabilities and the way the AI is “embodied,” or represented, play a role in developing trust. They also proposed a framework describing the factors that shape users’ cognitive and emotional trust in AI technologies, which can help organizations that deploy them.

Researchers at Carnegie Mellon University and Bar-Ilan University conducted the review, which appeared in the journal Academy of Management Annals.

The trust that users develop in AI will be central to determining its role in organizations. We addressed the dynamic nature of trust by exploring how trust develops for people interacting with different representations of AI (e.g., robots, virtual agents, or embedded) as well as the features of AI that facilitate the development of trust.

Anita Williams Woolley, Study Co-Author and Associate Professor, Organizational Behavior and Theory, Tepper School of Business, Carnegie Mellon University

The researchers looked in particular at the roles of tangibility (the ability to be touched or perceived), transparency (the extent to which the technology’s operating rules and logic are apparent to users), and reliability (whether the technology exhibits the same expected behavior over time).

The researchers also factored in immediacy behaviors (socially oriented gestures meant to increase interpersonal closeness, such as active listening and responsiveness) and task characteristics (whether a task involves technical or interpersonal judgment). They also examined anthropomorphism (the degree to which a technology is perceived as exhibiting human qualities).

The study’s authors searched Google Scholar for articles on human trust in AI technologies published between 1999 and 2019, identifying approximately 200 peer-reviewed articles and conference proceedings. The fields represented included human-computer interaction, human-robot interaction, organizational behavior, information systems, information technology, and engineering.

The researchers also searched three databases, identifying roughly 50 additional articles. In the end, they assessed about 150 articles reporting empirical research on human trust in AI technologies.

The study’s authors found that the form in which AI is represented plays a crucial role in the nature of the cognitive trust people develop. For robotic AI, the trajectory of trust resembled that of trust in human relationships, starting low and increasing with additional experience. The reverse occurred for virtual and embedded AI: high initial trust declined with experience.

In addition, the authors found that the level of machine intelligence characterizing an AI technology could mediate the development of cognitive trust, with higher intelligence leading to greater trust after subsequent use and experience.

For robotic AI, high machine intelligence often produced a high level of trust more rapidly. For virtual and embedded AI, high machine intelligence offered the chance to sustain the initially high levels of trust.

Transparency was another significant factor in establishing cognitive trust in virtual and embedded AI technologies, although the relationship between reliability and trust development was more complicated.

Anthropomorphism was uniquely important for the development of emotional trust, but its impact varied with the form of AI. It had a positive effect for virtual AI, while effects were mixed for robotic AI: people preferred anthropomorphic robots to mechanical-looking ones, but human-like robots could also elicit a sense of eeriness and discomfort.

The authors concluded that the factors influencing emotional trust differed from those influencing cognitive trust, and that some factors may carry different implications for each.

The study’s authors proposed a framework to guide the integration of AI technologies into organizations’ work. It treats the form in which AI is used, its level of machine intelligence, behaviors such as responsiveness, and reliability as factors shaping how individuals develop trust in AI, at both the cognitive and the emotional level.

Trust can predict the level of reliance on technology, while the level of correspondence between someone’s trust and the capabilities of the technology, known as calibration, can influence how the technology is used.

Ella Glikson, Study Co-Author and Assistant Professor, Graduate School of Business Administration, Bar-Ilan University

The study was financially supported by DARPA.

Source: https://www.cmu.edu/
