
New Artificial Intelligence Tool for Detecting Unfair Discrimination

A research team at Penn State and Columbia University has created a novel artificial intelligence (AI) tool for identifying unfair discrimination on the basis of gender or race.


A long-standing concern of civilized societies is preventing the unfair treatment of individuals on the basis of gender, race, or ethnicity. Yet such discrimination can be very difficult to detect, whether the decisions in question are made by automated AI systems or by humans.

This difficulty is further compounded by the widespread adoption of AI systems to automate decisions in many domains, including business, higher education, consumer finance, and policing.

Artificial intelligence systems, such as those involved in selecting candidates for a job or for admission to a university, are trained on large amounts of data. But if these data are biased, they can affect the recommendations of AI systems.

Vasant Honavar, Professor and Edward Frymoyer Chair, College of Information Sciences and Technology, Penn State

For instance, if a company has historically never hired a woman for a specific kind of job, then an AI system trained on those historical data will not recommend a woman for a new job, he stated.

“There’s nothing wrong with the machine learning algorithm itself,” stated Honavar. “It's doing what it's supposed to do, which is to identify good job candidates based on certain desirable characteristics. But since it was trained on historical, biased data, it has the potential to make unfair recommendations.”

The researchers developed an AI tool for detecting discrimination with respect to a protected attribute, such as gender or race, whether by AI systems or by human decision makers. The tool is based on the notion of causality, in which one thing (a cause) brings about another thing (an effect).

For example, the question, 'Is there gender-based discrimination in salaries?' can be reframed as, 'Does gender have a causal effect on salary?' or, in other words, 'Would a woman be paid more if she were a man?'

Aria Khademi, Graduate Student, College of Information Sciences and Technology, Penn State

Because it is not feasible to answer such a hypothetical question directly, the researchers' tool uses advanced counterfactual inference algorithms to arrive at a best guess.

“For instance,” continued Khademi, “one intuitive way of arriving at a best guess as to what a fair salary would be for a female employee is to find a male employee who is similar to the woman with respect to qualifications, productivity and experience. We can minimize gender-based discrimination in salary if we ensure that similar men and women receive similar salaries.”
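Khademi's matching intuition can be sketched in a few lines of code. The snippet below is not the authors' algorithm (their paper uses counterfactual inference over a causal model); it is a minimal nearest-neighbor matching estimate with hypothetical column names, and it assumes the matching features are already numeric.

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical columns: "gender", "salary", and numeric matching features.
FEATURES = ["qualifications", "productivity", "experience"]

def salary_gap_by_matching(df: pd.DataFrame, k: int = 5) -> float:
    """For each woman, estimate a 'fair' salary as the average salary of
    her k most similar male colleagues, then return the mean shortfall."""
    women = df[df["gender"] == "F"]
    men = df[df["gender"] == "M"]

    # Standardize features so no single one dominates the distance metric.
    scaler = StandardScaler().fit(df[FEATURES])
    nn = NearestNeighbors(n_neighbors=k).fit(scaler.transform(men[FEATURES]))
    _, idx = nn.kneighbors(scaler.transform(women[FEATURES]))

    # idx[i] holds the positions (within `men`) of woman i's nearest matches.
    matched = men["salary"].to_numpy()[idx].mean(axis=1)
    return float((matched - women["salary"].to_numpy()).mean())
```

A mean gap near zero is consistent with similar men and women receiving similar salaries; a large positive gap is a red flag, although matching only on observed features cannot rule out hidden confounders.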

To test their method, the investigators used several publicly available data sets. They used income data from the U.S. Census Bureau to determine whether gender-based discrimination exists in salaries, and New York City Police Department stop-and-frisk data to determine whether there is discrimination against people of color in arrests made following stops. The results were reported in May in the Proceedings of The Web Conference 2019.

“We analyzed an adult income data set containing salary, demographic and employment-related information for close to 50,000 individuals,” stated Honavar. “We found evidence of gender-based discrimination in salary. Specifically, we found that the odds of a woman having a salary greater than $50,000 per year are only one-third of those for a man. This suggests that employers should look for and, when appropriate, correct gender bias in salaries.”
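The one-third figure is an odds ratio. As a purely descriptive illustration (not the paper's counterfactual analysis), such a ratio can be computed on the public UCI Adult census income data set roughly as follows; the URL and column names follow the standard UCI distribution but should be verified before use.

```python
import pandas as pd

# UCI Adult data set: ~48,800 records of demographic and income fields.
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
COLS = ["age", "workclass", "fnlwgt", "education", "education_num",
        "marital_status", "occupation", "relationship", "race", "sex",
        "capital_gain", "capital_loss", "hours_per_week",
        "native_country", "income"]

df = pd.read_csv(URL, names=COLS, skipinitialspace=True)
df["high_income"] = df["income"].str.startswith(">50K")

def odds(high_income: pd.Series) -> float:
    p = high_income.mean()   # P(salary > $50K) within the group
    return p / (1.0 - p)     # odds = p / (1 - p)

by_sex = df.groupby("sex")["high_income"].apply(odds)
print("Odds ratio (female/male):", by_sex["Female"] / by_sex["Male"])
```

A raw odds ratio like this is only a descriptive statistic; it does not by itself establish a causal effect of gender on salary, which is exactly why the researchers' tool relies on counterfactual inference.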

The researchers’ analysis of the New York stop-and-frisk data set, which contains demographic and other data about individuals stopped by the New York City police, did reveal evidence of possible racial discrimination against African American and Hispanic individuals. However, it found no evidence of discrimination against them on average as a group.

You cannot correct for a problem if you don't know that the problem exists. To avoid discrimination on the basis of race, gender, or other attributes, you need effective tools for detecting discrimination. Our tool can help with that.

Vasant Honavar, Professor and Edward Frymoyer Chair, College of Information Sciences and Technology, Penn State

Honavar added that, as data-driven AI systems increasingly determine how banks decide who receives a loan, how police departments track individuals or groups for criminal activity, how businesses target advertisements to consumers, how colleges and universities decide who is admitted or receives financial aid, and whom employers decide to hire, there is a critical need for tools such as the one developed by him and his colleagues.

“Our tool,” Honavar said, “can help ensure that such systems do not become instruments of discrimination, barriers to equality, threats to social justice and sources of unfairness.”

Other authors who contributed to the paper are Sanghack Lee, an associate research scientist at Columbia University and a former graduate student in information sciences and technology at Penn State, and David Foley, a graduate student in informatics at Penn State.

The study was supported by the National Institutes of Health and the National Science Foundation.

Source: https://www.psu.edu/
