Study Finds That Chatbots with Human-Like Features but Low Interactivity Disappoint Users

A chatbot may be given a human name, such as Siri, or an avatar with human-like features. However, researchers say this may not be enough to win over a user if the device fails to sustain a conversational back-and-forth.

People are more likely to enjoy a conversation with an interactive chatbot that doesn’t look human than with a human-like chatbot that isn’t interactive, researchers say. (Image credit: PxHere: Mahammed Hassan)

Indeed, such human-like features could lead to a backlash against human-like chatbots that are less responsive.

In the study, the researchers found that chatbots with human features—for instance, a human avatar—but without interactivity let down the people who used them. Users responded better to a less interactive chatbot that had no human-like features, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory and an affiliate of Penn State’s Institute for CyberScience (ICS).

According to Sundar, high interactivity means swift responses that match a user’s queries, along with a threaded exchange that is easy to follow.

People are pleasantly surprised when a chatbot with low anthropomorphism—fewer human cues—has higher interactivity. But when there are high anthropomorphic visual cues, it may set up your expectations for high interactivity—and when the chatbot doesn’t deliver that—it may leave you disappointed.

S. Shyam Sundar, James P. Jimirro Professor of Media Effects / Co-Director, Media Effects Research Laboratory, Penn State University

By contrast, optimizing interactivity may be enough to compensate for a less human-like chatbot. Even slight changes in the dialogue, such as acknowledging what the user said before responding, can make the chatbot feel more interactive, Sundar said.

“In the case of the low-humanlike chatbot, if you give the user high interactivity, it’s much more appreciated because it provides a sense of dialogue and social presence,” said study lead author Eun Go, a former doctoral student at Penn State and now an assistant professor of broadcasting and journalism at Western Illinois University.

Because people may be skeptical about interacting with a machine, developers often give their chatbots human names—Apple’s Siri, for instance—or program a human-like avatar to appear when the chatbot responds to a user.

The researchers, who report their findings in Computers in Human Behavior, currently online, also found that merely hinting at whether a human or a machine is involved—offering an identity cue—shapes how people perceive the interaction.

Identity cues build expectations. When we say that it’s going to be a human or chatbot, people immediately start expecting certain things.

Eun Go, Assistant Professor, Broadcasting and Journalism, Western Illinois University

According to Sundar, the findings could help developers improve user acceptance of chat technology. Chat agents and virtual assistants are increasingly used in homes and by businesses because of their convenience, he added.

There’s a big push in the industry for chatbots. They’re low-cost and easy-to-use, which makes the technology attractive to companies for use in customer service, online tutoring and even cognitive therapy—but we also know that chatbots have limitations. For example, their conversation styles are often stilted and impersonal.

S. Shyam Sundar, James P. Jimirro Professor of Media Effects / Co-Director, Media Effects Research Laboratory, Penn State University

Furthermore, Sundar said, the research underscores the broader importance of high interactivity.

“We see this again and again that, in general, high interactivity can compensate for the impersonal nature of low anthropomorphic visual cues,” said Sundar. “The bottom line is that people who design these things have to be very strategic about managing user expectations.”

In total, 141 participants were recruited through Amazon Mechanical Turk, a crowdsourcing site that pays people to take part in studies. Participants signed up for a particular time slot and reviewed a scenario in which they were told they were shopping for a digital camera as a birthday present for a friend. They then navigated to an online camera store and were instructed to interact with its live chat feature.

To test users’ reactions to the chatbot, the researchers designed eight conditions by manipulating three factors. The first was the chatbot’s identity: when participants entered the live chat, a message indicated whether they were interacting with a person or a chatbot.

The second was the chatbot’s visual representation: in one condition the chatbot had a human-like avatar, while in another it had only a speech bubble. Finally, the chatbots responded with either high or low interactivity, the only difference being that in the high-interactivity condition part of the user’s message was repeated back. In every case, a human actually conducted the interaction with the participant.
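The design described above is a 2 × 2 × 2 factorial: three binary factors crossed to give eight conditions. As a minimal sketch, the conditions can be enumerated programmatically; the factor and level names below are illustrative labels, not the researchers’ exact terminology:

```python
from itertools import product

# Illustrative labels for the three manipulated factors
# (names are assumptions, not taken from the study's materials).
factors = {
    "identity": ["human", "chatbot"],          # identity cue shown to user
    "visual": ["human_avatar", "speech_bubble"],  # visual representation
    "interactivity": ["high", "low"],          # message-repeat vs. not
}

# Crossing all factor levels yields one dict per experimental condition.
conditions = [
    dict(zip(factors, levels)) for levels in product(*factors.values())
]

print(len(conditions))  # 2 x 2 x 2 = 8 conditions
for condition in conditions:
    print(condition)
```

Each participant would be assigned to exactly one of the eight resulting combinations.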

This research was conducted online; the researchers say that observing how people interact with chatbots in a laboratory setting could be one possible next step for the work.
