Research Shows New Data-Gathering Strategy Could Help Make AI Less Biased

Machine-learning systems are now widely used to inform everything from stock prices to medical diagnoses, so understanding how they reach their decisions has become crucial.

A new study from MIT suggests that when these systems produce biased results, the main culprit is often not the algorithm itself, but the way the data was gathered.

“Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,” said lead author Irene Chen, a PhD student who wrote the paper with MIT Professor David Sontag and postdoctoral associate Fredrik D. Johansson. “But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.”

By analyzing specific examples, the researchers could identify potential causes of the accuracy gaps between subgroups and measure how much each factor contributed to them. They then showed that each type of bias could be reduced, without sacrificing overall predictive accuracy, by changing how the data was collected.
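As a rough illustration of this kind of diagnostic, the sketch below (an assumption about how such a check might look, not the authors' code) trains a classifier on synthetic data with an under-represented subgroup and compares per-group error rates; large gaps are the signal that the data, rather than the algorithm, may be the problem. The data, features, and model choice are all placeholders.

```python
# Minimal sketch: diagnose accuracy gaps between subgroups.
# Synthetic data and a simple model stand in for a real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two subgroups (0 and 1), with group 1 under-represented.
n = 5000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])
x = rng.normal(size=(n, 5)) + group[:, None] * 0.5          # features shift by group
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n) > 0).astype(int)

x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(
    x, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(x_tr, y_tr)
pred = model.predict(x_te)

# Per-group error rates: a large gap points to where more (or better)
# data collection could help.
for g in (0, 1):
    mask = g_te == g
    err = np.mean(pred[mask] != y_te[mask])
    print(f"group {g}: n={mask.sum()}, error rate={err:.3f}")
```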

We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions.

David Sontag, Professor, MIT.

According to Chen, one of the major misconceptions is that more data is always better. Gathering more participants is not necessarily helpful, since drawing them from the same population tends to leave the same subgroups under-represented. Even the familiar image database ImageNet, with its many millions of images, has been shown to be biased toward the Northern Hemisphere.

“Often, the main thing is to go out and collect more data from those under-represented groups,” says Sontag. For instance, the researchers analyzed an income-prediction system and found that it was twice as likely to misclassify male employees as high-income and female employees as low-income. They found that if the dataset had been increased by a factor of 10, such mistakes would occur 40% less often.
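A hedged sketch of how such gender-conditional error rates could be computed is shown below; the tiny arrays are made-up placeholders rather than the researchers' income data, and the column meanings are assumptions for illustration only.

```python
# Sketch: measure how often each group is misclassified in each direction.
import numpy as np

# 1 = high income, 0 = low income; "M"/"F" is the recorded sex (toy data).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
sex    = np.array(["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"])

def conditional_rate(pred_value, true_value, group):
    """P(prediction == pred_value | true label == true_value, sex == group)."""
    mask = (sex == group) & (y_true == true_value)
    return np.mean(y_pred[mask] == pred_value) if mask.any() else float("nan")

# How often low-income men are pushed to "high income",
# and high-income women are pushed to "low income".
print("men wrongly predicted high-income: ", conditional_rate(1, 0, "M"))
print("women wrongly predicted low-income:", conditional_rate(0, 1, "F"))
```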

In another dataset, the team found that a system's ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. Existing approaches to reducing discrimination would essentially just make the non-Asian predictions less accurate, which is problematic in settings such as healthcare, where predictions can quite literally be life-or-death.

According to Chen, their strategy lets them examine a dataset and determine how many additional participants from different populations are needed to raise accuracy for the lower-accuracy group while preserving accuracy for the higher-accuracy group.

We can plot trajectory curves to see what would happen if we added 2,000 more people versus 20,000, and from that figure out what size the dataset should be if we want to have the best of all worlds. With a more nuanced approach like this, hospitals and other institutions would be better equipped to do cost-benefit analyses to see if it would be useful to get more data.

Irene Chen, Study Lead Author and PhD student, MIT.
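One plausible way to produce such trajectory curves, sketched below as an assumption rather than the paper's actual code, is to retrain the model on progressively larger samples and record each subgroup's test accuracy; the synthetic data, group proportions, and sample sizes here are illustrative only.

```python
# Sketch: subgroup accuracy as a function of training-set size.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, minority_frac=0.1):
    """Toy data with a minority subgroup whose features differ slightly."""
    group = (rng.random(n) < minority_frac).astype(int)
    x = rng.normal(size=(n, 4)) + group[:, None]
    y = ((x[:, 0] - x[:, 1] + rng.normal(size=n)) > 0).astype(int)
    return x, y, group

x_test, y_test, g_test = make_data(4000)

# Compare the accuracy each group would reach at different dataset sizes.
for n_train in (2_000, 20_000, 50_000):
    x_tr, y_tr, _ = make_data(n_train)
    model = LogisticRegression().fit(x_tr, y_tr)
    accs = [np.mean(model.predict(x_test[g_test == g]) == y_test[g_test == g])
            for g in (0, 1)]
    print(f"n={n_train:>6}: majority acc={accs[0]:.3f}, minority acc={accs[1]:.3f}")
```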

One can also try to collect extra data types from the existing participants. However, this will not help if the extra data is not actually relevant, such as statistics on people's height for a study about IQ. The question then becomes when, and for whom, that extra information should be gathered.

One way is to identify clusters of patients with large differences in accuracy. In the case of the ICU patients, a text-clustering method known as topic modeling showed that both cancer patients and cardiac patients had large racial gaps in accuracy. This finding suggests that more diagnostic tests for cancer or cardiac patients could narrow those gaps.
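The sketch below is a toy illustration of that idea: cluster patient notes with latent Dirichlet allocation (a common topic-modeling method, used here as an assumption since the article does not describe the paper's exact pipeline) and compare error rates across a demographic attribute within each cluster. The notes, labels, and attributes are invented placeholders, not the ICU dataset.

```python
# Sketch: find patient clusters where prediction error differs by group.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "metastatic cancer chemotherapy admitted oncology",
    "cardiac arrest stent chest pain troponin",
    "lung cancer tumor radiation oncology",
    "heart failure cardiac echo ejection fraction",
    "sepsis fever antibiotics blood culture",
    "pneumonia cough antibiotics chest xray",
]
is_asian    = np.array([1, 0, 0, 1, 1, 0])   # demographic attribute (toy)
model_wrong = np.array([1, 0, 1, 1, 0, 0])   # 1 = mortality prediction was wrong

counts = CountVectorizer().fit_transform(notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
cluster = lda.fit_transform(counts).argmax(axis=1)   # hard topic assignment

# Within each topic cluster, compare error rates between groups; large gaps
# flag patient types (e.g. cancer, cardiac) that may need more data or tests.
for c in np.unique(cluster):
    in_c = cluster == c
    for name, grp in (("Asian", is_asian == 1), ("non-Asian", is_asian == 0)):
        sel = in_c & grp
        if sel.any():
            print(f"topic {c}, {name}: error rate={model_wrong[sel].mean():.2f}")
```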

The researchers will present the paper at the annual Conference on Neural Information Processing Systems (NIPS) in Montreal in December 2018.

Source: https://www.csail.mit.edu/
