AI to Help Shine Light on Human Biases

AI has the potential to reveal biases in news reporting that would otherwise go unnoticed. At McGill University, researchers had a computer program generate news coverage of COVID-19, using the headlines of CBC articles as prompts.

Online news on a smartphone and laptop. Image Credit: McGill University.
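
The announcement does not specify which model or tooling the researchers used. Purely as a hedged illustration, the sketch below shows how headlines could serve as prompts for an off-the-shelf generative language model; the model choice (GPT-2 via the Hugging Face transformers library) and the sample headlines are assumptions, not details from the study.

```python
from transformers import pipeline

# Build a text-generation pipeline around GPT-2. The model choice and
# the sample headlines below are assumptions made for illustration.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical CBC-style headlines standing in for the study's prompts.
headlines = [
    "COVID-19 cases rise as provinces weigh new restrictions",
    "Hospitals prepare for a surge in pandemic admissions",
]

for headline in headlines:
    # Let the model continue each headline into a short "article".
    output = generator(headline, max_new_tokens=80, do_sample=True, top_p=0.9)
    print(output[0]["generated_text"])
    print("---")
```

In practice, sampling settings such as top_p control how varied the simulated coverage is, and the generations would be collected into a corpus for comparison against the real articles.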

They then compared the simulated news coverage with the actual reporting from the same period and found that CBC coverage was less focused on the medical emergency and more focused, in more positive terms, on geopolitics and personalities.
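
The release does not detail how the difference in tone was measured. As a minimal sketch of one way such a comparison could be run, the code below averages a lexicon-based sentiment score over each corpus; the use of NLTK's VADER scorer and the placeholder texts are assumptions, not the study's methodology.

```python
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time setup: import nltk; nltk.download("vader_lexicon")
analyzer = SentimentIntensityAnalyzer()

def mean_compound(texts):
    """Average VADER compound score: -1 (most negative) to +1 (most positive)."""
    scores = [analyzer.polarity_scores(t)["compound"] for t in texts]
    return sum(scores) / len(scores)

# Placeholder corpora; in the study these would be the real CBC articles
# and their model-generated counterparts.
real_coverage = ["Officials praised the community response to the crisis."]
simulated_coverage = ["The virus spread rapidly, overwhelming hospitals."]

print("real:     ", mean_compound(real_coverage))
print("simulated:", mean_compound(simulated_coverage))
```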

Reporting on real-world events requires complex choices, including decisions about which events and players take center stage. By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies.

Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University

According to the team, assessing such alternatives is crucial, given the close relationship between public opinion, media framing, and government policy.

The AI saw COVID-19 primarily as a health emergency and interpreted the events in more bio-medical terms, whereas the CBC coverage tended to focus on person- rather than disease-centered reporting. The CBC coverage was also more positive than expected given that it was a major health crisis, producing a sort of rally-around-the-flag effect. This positivity works to downplay public fear.

Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University
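
One illustrative way to make the person-centered versus disease-centered contrast measurable is to compare what share of each corpus's tokens comes from biomedical versus person-focused word lists. Everything in the sketch below, including the word lists and sample text, is an assumption for illustration rather than the study's actual method.

```python
import re
from collections import Counter

# Assumed word lists; any lexicons the study may have used are not
# described in the press release.
BIOMEDICAL_TERMS = {"virus", "infection", "symptom", "vaccine", "hospital"}
PERSON_TERMS = {"minister", "premier", "spokesperson", "official", "leader"}

def term_share(texts, vocabulary):
    """Fraction of all tokens that fall inside a given vocabulary."""
    tokens = [w for text in texts for w in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    return sum(counts[w] for w in vocabulary) / max(len(tokens), 1)

corpus = ["The premier said the new vaccine would reassure each official."]
print("biomedical share:", term_share(corpus, BIOMEDICAL_TERMS))
print("person share:    ", term_share(corpus, PERSON_TERMS))
```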

Exploring How Biases Play Out

While numerous studies attempt to understand the biases inherent in AI, the scientists note that there is also an opportunity to harness it as a tool for revealing the biases of human expression.

The goal is to help us see things we might otherwise miss.

Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University

“We’re not suggesting that the AI itself is unbiased. But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” stated Sil Hamilton, a research assistant and student working under the supervision of Professor Piper.

Using AI to Understand the Past, and One Day to Anticipate the Future

As far as the team is concerned, this work is only the tip of the iceberg, setting the stage for new avenues of study in which AI could be used not only to examine past human behavior but also to anticipate future actions, for instance by predicting possible judicial or political outcomes.

At present, the research team, headed by Hamilton, is working on a project that uses AI to model decision-making at the US Supreme Court.

Hamilton stated, “Given past judicial behavior, how might justices respond to future pivotal cases or older cases that are being re-litigated? We hope new developments in AI can help.”
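
No methodology for that project has been published. Purely as a toy illustration of the idea of predicting judicial behavior from past votes, the sketch below fits a linear classifier over hand-invented features; all names, features, and data are hypothetical, and the McGill project may use entirely different data and models.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past votes: (justice, issue area) -> vote (1 = affirm).
past_votes = [
    ({"justice": "A", "issue": "speech"}, 1),
    ({"justice": "A", "issue": "privacy"}, 0),
    ({"justice": "B", "issue": "speech"}, 0),
    ({"justice": "B", "issue": "privacy"}, 1),
]

features, votes = zip(*past_votes)
vectorizer = DictVectorizer()
X = vectorizer.fit_transform(features)

model = LogisticRegression().fit(X, votes)

# Estimate how justice "A" might vote on a future privacy case.
future_case = vectorizer.transform([{"justice": "A", "issue": "privacy"}])
print(model.predict_proba(future_case))  # [P(vote=0), P(vote=1)]
```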

Source: https://www.mcgill.ca/
