
Study Explores Humans’ Perception of AI-Generated Profiles in Online Marketplace

On Airbnb and other online marketplaces, a host’s profile can mean the difference between a booked room and a vacant one.

Too long, too peppy, too many exclamation points? Language matters in a user’s search for authenticity and trust, two significant factors in any online exchange.

With those factors at stake, can Airbnb hosts safely rely on an algorithm to write their profiles for them?

That depends, according to a recent study by researchers at Stanford University and Cornell University. If every host uses algorithmically generated profiles, users trust them; but if only some hosts delegate their writing to artificial intelligence (AI), profiles suspected of being machine-written are distrusted. The researchers dubbed this the “replicant effect,” a nod to the movie “Blade Runner.”

“Participants were looking for cues that felt mechanical versus language that felt more human and emotional,” said Maurice Jakesch, a doctoral student in information science at Cornell Tech and lead author of “AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness.” The paper will be presented at the ACM Conference on Human Factors in Computing Systems, held May 4-9, 2019, in Glasgow, Scotland.

They had their own theories of what an AI-generated profile would look like. If there were serious spelling mistakes in a profile, they would say it’s more human, whereas if someone’s profile looked disjointed or senseless, they assumed it was AI.

Maurice Jakesch, Study Lead Author and Doctoral Student, Department of Information Science, Cornell Tech

Artificial intelligence stands to reshape natural language technologies and how humans communicate with one another. People already encounter AI-mediated communication: Gmail scans the content of incoming emails and proposes one-click “smart replies,” and a new generation of writing aids both corrects spelling errors and refines writing style.

We’re beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it’s a question of: ‘Do people want that?’ We might run into a situation where AI-mediated communication is so widespread that it becomes part of how people evaluate what they see online.

Maurice Jakesch, Study Lead Author and Doctoral Student, Department of Information Science, Cornell Tech

In the study, the investigators examined whether users trust algorithmically generated or optimized self-presentations, particularly in online marketplaces. The team ran three experiments, enlisting participants on Amazon Mechanical Turk to evaluate real, human-written Airbnb host profiles. In some cases, participants were told that all or some of the profiles had been generated by an automated AI system. They were then asked to rate each profile’s trustworthiness.

When participants believed they were viewing either all AI-generated or all human-generated profiles, they did not appear to trust one more than the other, giving the two sets roughly the same ratings.

That changed when participants were told they were viewing a mixed set of profiles. Left to decide for themselves whether each profile was written by an algorithm or a human, users distrusted the ones they believed to be machine-generated.

“The more participants believed a profile was AI-generated, the less they tended to trust the host, even though the profiles they rated were written by the actual hosts,” wrote the authors.

So what is wrong with using AI-mediated communication? One study participant put it this way: AI-generated profiles “can be handy but also a bit lazy. Which makes me question what else they’ll be lazy about.”

The researchers said that as AI becomes more robust and commonplace, foundational guidelines, ethics, and practice grow ever more important. Their findings suggest there are ways to design AI communication tools that improve trust for human users. For a start, companies could add an emblem to all content produced by an AI, as some media outlets already do for algorithmically generated stories, Jakesch pointed out.

Jakesch added that norms, design guidelines, and policies for AI-mediated communication are worth investigating now.

The value of these technologies will depend on how they are designed, how they are communicated to people and whether we’re able to establish a healthy ecosystem around them. It comes with a range of new ethical questions: Is it acceptable to have my algorithmic assistant generate a more eloquent version of me? Is it okay for an algorithm to tell my kids goodnight on my behalf? These are questions that need to be discussed.

Maurice Jakesch, Study Lead Author and Doctoral Student, Department of Information Science, Cornell Tech

The paper was co-authored by Mor Naaman, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech; Megan French and Jeffrey T. Hancock of Stanford University; and Xiao Ma, a doctoral student in information science at Cornell Tech.

