
Attempts to Eliminate Gender Bias in Natural Language Processing

Scientists have developed an improved way to reduce gender bias in natural language processing models while preserving key information about the meanings of words. This could be a significant step toward addressing the problem of human biases creeping into machine learning.


Image Credit: Shutterstock.com/PopTika

While a computer is an unbiased machine, humans produce much of the information and programming that passes through it. This can be a problem if conscious or unconscious human biases are replicated in the text samples that AI algorithms use to learn and “understand” language.

Computers do not understand text directly, explains Lei Ding, the study’s first author and a graduate student in the Department of Mathematical and Statistical Sciences. To process text, they must first convert words into sequences of numbers, a process known as word embedding.
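To make the idea concrete, here is a minimal Python sketch of word embeddings. The three-dimensional vectors and their values are invented for illustration; embeddings used in practice have hundreds of dimensions and are learned from large text corpora.

```python
# Toy illustration of word embeddings: each word becomes a list of numbers,
# and the angle between two lists reflects how related the words are.
# The vectors below are invented for demonstration, not taken from the study.
import numpy as np

embeddings = {
    "nurse":  np.array([0.62, 0.10, 0.55]),
    "doctor": np.array([0.70, 0.05, 0.60]),
    "banana": np.array([0.01, 0.90, 0.12]),
}

def cosine_similarity(a, b):
    """Higher values mean two words appear in more similar contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["nurse"], embeddings["doctor"]))   # high: related words
print(cosine_similarity(embeddings["nurse"], embeddings["banana"]))   # low: unrelated words
```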

Natural language processing is teaching computers to understand texts and languages.

Bei Jiang, Associate Professor, Department of Mathematical and Statistical Sciences, University of Alberta

After this phase, scientists can plot words as points on a 2D graph and visualize their associations with one another. This helps them better grasp the extent of the gender bias and, later, judge whether the bias has been successfully removed.
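As a rough sketch of that kind of inspection, the snippet below projects stand-in embeddings to two dimensions with PCA and scores each word along a “he minus she” direction. The random vectors and the simple bias probe are assumptions for illustration, not the study’s actual visualization or bias measure.

```python
# Illustrative only: stand-in random vectors, a PCA projection to 2-D, and a
# crude "he minus she" bias probe; not the study's actual procedure.
import numpy as np

rng = np.random.default_rng(0)
words = ["he", "she", "doctor", "nurse", "engineer", "teacher"]
X = np.stack([rng.normal(size=50) for _ in words])   # stand-in 50-D embeddings

# Reduce to 2-D (PCA via SVD) so the words can be plotted against each other.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T

# Score each word along a "he minus she" direction; values far from zero
# suggest a gendered association that debiasing should shrink.
gender_dir = X[words.index("he")] - X[words.index("she")]
gender_dir /= np.linalg.norm(gender_dir)

for word, (x, y), vec in zip(words, coords, X):
    print(f"{word:>9s}  2D: ({x:+.2f}, {y:+.2f})   gender score: {vec @ gender_dir:+.2f}")
```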

All the Meaning, None of the Bias

Though prior attempts to reduce or eliminate gender bias in texts have had some success, the difficulty with those methods is that gender bias is not the only thing removed from the texts.

“In many gender debiasing methods, when they reduce the bias in a word vector, they also reduce or eliminate important information about the word,” explains Jiang. This is called semantic information, because it provides vital contextual data that may be required in future tasks using those word embeddings.

For instance, when handling the word “nurse,” scientists want the system to remove any gender information associated with that word while retaining the information that links it to related words such as doctor, hospital, and medicine.

“We need to preserve that semantic information. Without it, the embeddings would have very bad performance [in natural language processing tasks and systems],” says Ding.
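The paper itself develops a causal-inference approach; as a rough illustration of the goal described above, the sketch below uses the simpler, older idea of projecting out a gender direction. The “gender” and “medical” directions and the toy vectors are invented; the point is only that removing the gender component of “nurse” need not destroy its closeness to “doctor.”

```python
# Toy sketch of debiasing by projecting out a gender direction (the older
# "hard debiasing" idea, not the authors' causal-inference method).
# All vectors and directions below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
dim = 50
gender = rng.normal(size=dim); gender /= np.linalg.norm(gender)
medical = rng.normal(size=dim); medical /= np.linalg.norm(medical)

# Toy embeddings: "nurse" carries a large gender component, "doctor" a small one.
vecs = {
    "doctor": medical + 0.1 * gender + 0.05 * rng.normal(size=dim),
    "nurse":  medical + 0.8 * gender + 0.05 * rng.normal(size=dim),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def remove_gender_component(v, direction):
    """Subtract the part of v that lies along the (unit-length) gender direction."""
    return v - (v @ direction) * direction

nurse_debiased = remove_gender_component(vecs["nurse"], gender)

print("gender component before:", round(float(vecs["nurse"] @ gender), 2))
print("gender component after: ", round(float(nurse_debiased @ gender), 2))          # ~0
print("similarity to doctor before:", round(cos(vecs["nurse"], vecs["doctor"]), 2))
print("similarity to doctor after: ", round(cos(nurse_debiased, vecs["doctor"]), 2))  # largely preserved
```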

Fast, Accurate — and Fair

The novel methodology also outperformed existing debiasing methods in various tasks that rely on word embeddings.

As it evolves, the methodology may provide a flexible framework that other researchers can adapt to their own word embeddings. Provided it is given guidance on which words to use, the methodology can be applied to remove bias associated with any specific group.

While the methodology still requires researcher involvement at this stage, Ding explains that in the future it may be possible to build in a system or filter that automatically removes gender bias in a range of settings.

The proposed methodology is part of a wider effort called BIAS: Responsible AI for Gender and Ethnic Labor Market Equality, which aims to address real-world problems.

For instance, people reading the same job advertisement may respond differently to specific phrases in the description that commonly carry a gendered association.

A system based on the methods developed by Ding and his colleagues would be able to flag words that, because of perceived gender bias, may influence a potential applicant’s impression of the position or decision to apply, and to suggest alternative wording that lessens this bias.
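A hypothetical sketch of that kind of screening step is shown below. The flagged terms and suggested substitutions are illustrative assumptions, not drawn from the researchers’ system.

```python
# Hypothetical job-ad screening step: flag wording often reported to carry a
# gendered association and suggest a neutral alternative. The word list and
# substitutions are illustrative assumptions, not the researchers' system.
GENDERED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "dominant": "leading",
    "aggressive": "proactive",
    "nurturing": "supportive",
}

def review_job_ad(text: str) -> list[tuple[str, str]]:
    """Return (flagged word, suggested alternative) pairs found in the ad."""
    suggestions = []
    for raw in text.lower().split():
        word = raw.strip(".,!?;:")
        if word in GENDERED_TERMS:
            suggestions.append((word, GENDERED_TERMS[word]))
    return suggestions

ad = "We need an aggressive coding ninja to join our dominant sales team."
for flagged, alternative in review_job_ad(ad):
    print(f"Consider replacing '{flagged}' with '{alternative}'.")
```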

While many AI models and systems focus on finding ways to complete tasks more quickly and accurately, Ding notes that the team’s work is part of a growing field that seeks to advance another essential aspect of these models and systems.

“People are focusing more on responsibility and fairness within artificial intelligence systems,” says Ding.

The Canada-UK Artificial Intelligence Initiative funded the research.

Journal Reference:

Ding, L., et al. (2022) Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11). https://doi.org/10.1609/aaai.v36i11.21443.

Source: https://www.ualberta.ca/index.html
