
New Perspective Offered by AI-Powered Writing Assistants

According to a new study, artificial intelligence-powered writing assistants that autocomplete sentences or offer “smart replies” put words into people’s mouths and ideas into their heads.

Maurice Jakesch, a doctoral student in the field of information science at Cornell University, asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?”

People who used an AI writing assistant biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and considerably more likely to say they held the same opinion, compared with people who wrote without AI assistance.

The researchers said the findings indicate that the biases baked into AI writing tools, whether intentional or not, could have worrying repercussions for culture and politics.

We’re rushing to implement these AI models in all walks of life, but we need to better understand the implications. Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society—shifts in language and opinions.

Mor Naaman, Study Co-Author and Professor, Jacobs Technion-Cornell Institute, Cornell University

Naaman is also a professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science.

While other researchers have examined how large language models like ChatGPT can generate political messages and persuasive ads, this is the first study to show that the process of writing with an AI-powered tool can sway a person’s opinions.

Jakesch presented the study, “Co-Writing with Opinionated Language Models Affects Users’ Views,” at the 2023 CHI Conference on Human Factors in Computing Systems in April, where the paper received an honorable mention.

To understand how people interact with AI writing assistants, Jakesch steered a large language model to express either positive or negative opinions about social media. Participants wrote their paragraphs, either alone or with one of the opinionated assistants, on a platform built by the researchers that mimics a social media website.

The platform collects data from participants as they type, such as which AI suggestions they accept and how long they take to compose the paragraph.
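The study does not publish the code behind its opinionated assistants, but the core idea of biasing suggestions toward one side of a topic can be sketched in a few lines. In the toy ranker below, the word lists, function names, and scoring scheme are all illustrative assumptions, not the researchers’ implementation:

```python
# Toy sketch of an "opinionated" suggestion ranker.
# All names and word lists are illustrative, not the study's actual code.

POSITIVE_WORDS = {"good", "connects", "helpful", "community"}
NEGATIVE_WORDS = {"bad", "harmful", "addictive", "misinformation"}

def opinion_score(text: str) -> int:
    """Crude lexicon-based score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def pick_suggestion(candidates: list[str], bias: str) -> str:
    """Return the candidate continuation that best matches the assistant's bias."""
    key = opinion_score if bias == "pro" else (lambda t: -opinion_score(t))
    return max(candidates, key=key)

candidates = [
    "social media connects people and builds community",
    "social media spreads misinformation and is addictive",
]
print(pick_suggestion(candidates, bias="pro"))   # surfaces the positive continuation
print(pick_suggestion(candidates, bias="anti"))  # surfaces the negative continuation
```

A real assistant would rank continuations from a steered language model rather than a word list, but the effect on the writer is the same: whichever continuation the ranker favors is the one the user sees and, as the study found, often accepts.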

Independent judges found that people who co-wrote with the pro-social-media AI assistant composed more sentences arguing that social media is good, and vice versa, than participants who wrote without an assistant. These participants were also more likely to report the assistant’s opinion as their own in a follow-up survey.

The researchers considered the possibility that participants accepted the AI suggestions simply to finish the task faster. However, even participants who took several minutes to compose their paragraphs produced heavily influenced statements. The survey revealed that most participants did not even notice the AI was biased and did not realize they were being influenced.

The process of co-writing doesn’t really feel like I’m being persuaded. It feels like I’m doing something very natural and organic—I’m expressing my own thoughts with some aid.

Mor Naaman, Study Co-Author and Professor, Jacobs Technion-Cornell Institute, Cornell University

When the researchers repeated the experiment with a different topic, they again found that participants were swayed by the assistants. The team is now examining how the experience produces the shift in opinion and how long the effects last.

Just as social media has altered the political landscape by easing the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose.

For instance, some organizations have declared they plan to develop an alternative to ChatGPT, designed to express highly conservative viewpoints.

The researchers said that such technologies warrant more public discussion about how they might be misused, and that they should be monitored and regulated.

The more powerful these technologies become, and the more deeply we embed them in the social fabric of our societies, the more careful we might want to be about how we’re governing the values, priorities, and opinions built into them.

Maurice Jakesch, Doctoral Student, Field of Information Science, Cornell University

Advait Bhat of Microsoft Research, Daniel Buschek of the University of Bayreuth, and Lior Zalmanson of Tel Aviv University also contributed to the study.

This study was financially supported by the National Science Foundation, the German National Academic Foundation, and the Bavarian State Ministry of Science and the Arts.

Journal Reference

Jakesch, M., et al. (2023) Co-Writing with Opinionated Language Models Affects Users’ Views. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
