
Online Talks on Chess-Piece Colors Could Trigger Hate-Speech Detection Software

The latest TV miniseries about a chess master, “The Queen’s Gambit,” may have rekindled interest in chess. A word of caution, though: talking about game-piece colors on social media can lead to misunderstandings, at least for hate-speech detection software.

A new study from LTI researchers shows that chess terms such as black, white, attack, and threat can trigger hate-speech detectors on social media. Image Credit: Carnegie Mellon University School of Computer Science.

Two researchers from Carnegie Mellon University suspect that is what happened to Antonio Radić, known online as 'agadmator,' a Croatian chess player who hosts a popular YouTube channel. His account was blocked in June 2020 for 'harmful and dangerous' content.

YouTube never explained why the channel was blocked, but reinstated it within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI).

However, it is possible that Radić’s 'black vs. white' talk during his interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.

“We don’t know what tools YouTube uses, but if they rely on artificial intelligence to detect racist language, this kind of accident can happen,” KhudaBukhsh said. “And if it happened publicly to someone as high-profile as Radić, it may well be happening quietly to lots of other people who are not so well known.”

To test whether this was plausible, KhudaBukhsh and Rupak Sarkar, a course research engineer at LTI, tried out two state-of-the-art speech classifiers, a type of AI software that can be trained to detect indications of hate speech.

They used the classifiers to screen more than 680,000 comments collected from five popular chess-focused YouTube channels.
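The article does not say which classifiers the researchers used or how flagging thresholds were set, but the screening step can be illustrated with a short sketch. The snippet below is a minimal example using the Hugging Face transformers library, with the off-the-shelf toxicity model "unitary/toxic-bert" as a hypothetical stand-in; it runs the classifier over a few chess-style comments and flags any that score above a chosen threshold.

```python
from transformers import pipeline

# Illustrative stand-in model; the study does not name the classifiers it tested.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

chess_comments = [
    "White attacks the black knight and wins material.",
    "Black's threat on the kingside looks unstoppable.",
    "Great analysis, thanks for the video!",
]

THRESHOLD = 0.5  # score above which a comment is treated as flagged

for comment in chess_comments:
    prediction = classifier(comment)[0]  # e.g. {'label': 'toxic', 'score': 0.87}
    # Label names depend on the chosen model; for this stand-in, "toxic" is the
    # positive class, so a high-scoring "toxic" label counts as a flag.
    flagged = prediction["label"] == "toxic" and prediction["score"] >= THRESHOLD
    print(f"{'FLAGGED' if flagged else 'ok':7}  {prediction['score']:.2f}  {comment}")
```

Innocuous chess phrasing built around words such as black, white, attack, and threat is exactly the kind of input a pipeline like this can mis-score.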

The researchers then randomly sampled 1,000 of the comments that at least one of the classifiers had flagged as hate speech. A manual review of that sample showed that the vast majority, 82%, contained no hate speech. Words such as black, white, threat, and attack appeared to be the triggers, the researchers noted.
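The false-positive estimate itself is simple bookkeeping: sample the flagged comments, have a human label each one, and compute the share that is not actually hate speech. The sketch below uses an entirely synthetic pool of flagged comments, with proportions chosen only for illustration, to show the calculation.

```python
import random

random.seed(0)

# Synthetic stand-in for the pool of comments flagged by at least one
# classifier; each entry pairs a comment with a human verdict
# (True = genuinely hateful). Proportions are illustrative only.
flagged_pool = (
    [("White attacks black's weak pawn.", False)] * 8200
    + [("<genuinely hateful comment>", True)] * 1800
)

# As in the study, draw a random sample of 1,000 flagged comments for manual review.
sample = random.sample(flagged_pool, k=1000)

false_positives = sum(1 for _, is_hate in sample if not is_hate)
print(f"Estimated false-positive rate: {false_positives / len(sample):.0%}")
```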

Like other AI programs that rely on machine learning, these classifiers are trained on large numbers of examples, and their accuracy can vary depending on the set of examples used.

For example, KhudaBukhsh recalled an exercise he encountered as a student, in which the goal was to identify 'active dogs' and 'lazy dogs' in a set of photos. Many of the training photos of active dogs showed wide expanses of grass, because running dogs were often photographed at a distance.

As a result, the program sometimes identified photos containing large amounts of grass as examples of active dogs, even when the photos included no dogs at all.

When it comes to chess, most training data sets probably contain few examples of chess talk, which leads to misclassification, he explained.

The research paper by KhudaBukhsh and Sarkar, a recent graduate of Kalyani Government Engineering College in India, recently won the Best Student Abstract Three-Minute Presentation award at the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI).
