The use of artificial intelligence (AI), technologies that can interact with the environment and simulate human intelligence, has the potential to significantly change the way we work. Successfully integrating AI into organizations depends on workers' level of trust in the technology. A new review examined two decades of research on how people develop trust in AI.
The authors concluded that the way AI is represented, or "embodied," and AI's capabilities contribute to developing trust. They also proposed a framework that addresses the elements shaping users' cognitive and emotional trust in AI, which can guide organizations that deploy the technology.
The review, by researchers at Carnegie Mellon University and Bar-Ilan University, appears in Academy of Management Annals.
"The trust that users develop in AI will be central to determining its role in organizations," explains Anita Williams Woolley, Associate Professor of Organizational Behavior and Theory at Carnegie Mellon University's Tepper School of Business, who coauthored the study.
"We addressed the dynamic nature of trust by exploring how trust develops for people interacting with different representations of AI (e.g., robots, virtual agents, or embedded AI) as well as the features of AI that facilitate the development of trust."
Anita Williams Woolley, Carnegie Mellon University's Tepper School of Business
Specifically, the researchers examined the role of tangibility (the capability of being perceived or touched), transparency (the degree to which the operating rules and logic of the technology are apparent to users), and reliability (whether the technology exhibits the same expected behavior over time).
They also considered task characteristics (whether a task calls for technical or interpersonal judgment), immediacy behaviors (socially oriented gestures intended to increase interpersonal closeness, such as active listening and responsiveness), and anthropomorphism (the perception that technology can have human qualities).
The authors searched Google Scholar for articles on human trust in AI published between 1999 and 2019, identifying about 200 peer-reviewed articles and conference proceedings.
Fields represented included organizational behavior, human-computer interaction, human-robot interaction, information systems, information technology, and engineering. They also used three databases to identify an additional 50 articles. In the end, they reviewed approximately 150 articles that presented empirical research on human trust in AI.
The authors found that the representation of AI played an important role in the nature of the cognitive trust people develop. For robotic AI, the trajectory of trust resembled that of trust in human relationships, starting low and increasing with experience. But for virtual and embedded AI, the opposite occurred: High initial trust declined with experience.
The authors also found that the level of machine intelligence characterizing AI may moderate the development of cognitive trust, with higher intelligence leading to higher trust following use. For robotic AI, a high level of machine intelligence generally led to faster development of a high level of trust.
For virtual and embedded AI, high machine intelligence offered the possibility of maintaining the initial high levels of trust. Transparency was also an important factor for establishing cognitive trust in virtual and embedded AI, though the relationship between reliability and the development of trust in AI was complex.
Anthropomorphism was uniquely important for the development of emotional trust, but its effect differed depending on the form of AI. For virtual AI, anthropomorphism had a positive effect. For robotic AI, effects were mixed: People tended to like anthropomorphic robots more than mechanical-looking robots, but these human-like robots could also evoke discomfort and a sense of eeriness.
Factors that influenced emotional trust differed from those that influenced cognitive trust, and some factors may have had different implications for each, the authors concluded.
As a guide to integrating AI into organizations' work, the authors proposed a framework that treats the form in which AI is represented, its level of machine intelligence, behaviors such as responsiveness, and reliability as factors that shape how people develop trust in AI, both cognitively and emotionally.
"Trust can predict the level of reliance on technology, while the level of correspondence between someone's trust and the capabilities of the technology, known as calibration, can influence how the technology is used."
Ella Glikson, co-author, Graduate School of Business Administration at Bar-Ilan University
The research was funded by DARPA.