Scientists Warn Against Hyper-Realistic AI Voices

As AI becomes highly realistic, our trust in those with whom we communicate may be compromised.

Examining How AI Systems Impact Human Trust During Interactions

Jonas Ivarsson and Oskar Lindwall. Image Credit: University of Gothenburg

At the University of Gothenburg, scientists have analyzed how advanced AI systems affect people’s trust in the individuals they interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is connected instead to a computer system that communicates via pre-recorded loops. The scammer spends significant time attempting the fraud, patiently listening to the “man’s” slightly confusing and repetitive stories.

Oskar Lindwall, a professor of communication at the University of Gothenburg, notes that it often takes a long time for people to realize they are interacting with a technical system.

In partnership with Professor of Informatics Jonas Ivarsson, he has written an article titled Suspicious Minds: The Problem of Trust and Conversational Agents, exploring how individuals interpret situations where one of the parties may be an AI agent. The article stresses the negative consequences of harboring suspicion toward others, such as the damage it can do to relationships.

Ivarsson gives the example of a romantic relationship in which trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.

Their study found that, during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.

The scientists say that a pervasive design perspective is driving the development of AI with increasingly human-like features.

While this might be appealing in some contexts, it can also be problematic, particularly when it is unclear who one is communicating with. Ivarsson questions whether AI needs to have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.

In the case of the would-be fraudster calling the “older man,” the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to old age.

As soon as an AI has a voice, listeners infer attributes like age, gender, and socioeconomic background. This makes it harder to determine whether one is interacting with a computer.

The scientists suggest creating AI with well-functioning and eloquent voices that are nevertheless clearly synthetic, thereby increasing transparency.

Communication with others involves not only the risk of deception but also joint meaning-making and relationship-building. Uncertainty about whether one is talking to a human or a computer affects this aspect of communication.

While this may not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection might be negatively affected.

Journal Reference

Ivarsson, J. & Lindwall, O. (2023) Suspicious Minds: The Problem of Trust and Conversational Agents. Computer Supported Cooperative Work (CSCW).

