Scientists from the University of Warwick, Imperial College London, EPFL (Lausanne), and Sciteb Ltd have developed a mathematical method to help businesses and regulators manage and police the risk of artificial intelligence (AI) systems making unethical, and potentially very costly and damaging, commercial choices: an ethical eye on AI.
AI is increasingly deployed in commercial applications. For instance, AI can be used to set the prices of insurance products sold to a specific customer. There are valid reasons for setting different prices for different people, but it may also be profitable for the AI to “game” customers’ psychology or willingness to shop around.
Although the AI has a huge number of possible strategies to choose from, some are unethical: adopting them incurs a moral cost and a potentially considerable economic penalty, since stakeholders will impose sanctions if they discover that such a strategy has been used.
In such cases, regulators may levy fines of billions of dollars, pounds, or euros, customers may boycott the company, or both may happen.
Thus, in a setting where decisions are increasingly made without human intervention, there is a very powerful incentive to understand under what conditions AI systems might adopt an unethical strategy, and to reduce that risk or, if possible, eliminate it entirely.
Statisticians and mathematicians from the University of Warwick, Imperial, EPFL, and Sciteb Ltd have collaborated to help businesses and regulators by formulating a new “Unethical Optimization Principle” and providing a simple formula to estimate its effect.
The researchers describe the full details in a paper titled “An unethical optimization principle,” published in the journal Royal Society Open Science on Wednesday, July 1st, 2020.
The four authors of the study are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute, University of Warwick.
Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space.
Robert MacKay, Professor, Mathematics Institute, University of Warwick
MacKay continued, “Optimization can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”
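The disproportionate-selection effect MacKay describes can be illustrated with a toy simulation. All numbers and distributions below are illustrative assumptions, not taken from the paper: we suppose 1% of a strategy space is unethical, and that unethical strategies have the same average return but a wider spread of outcomes because the optimizer’s objective omits their moral and regulatory costs. Picking the best-looking strategy then selects an unethical one far more often than its 1% base rate:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000         # total strategies in the space (assumed)
N_UNETHICAL = 100  # 1% of strategies are unethical (assumed)
TRIALS = 200       # number of independent optimization runs

unethical_wins = 0
for _ in range(TRIALS):
    # Ethical strategies: returns drawn from a standard normal.
    ethical = rng.normal(0.0, 1.0, N - N_UNETHICAL)
    # Unethical strategies: same mean, heavier spread, because the
    # objective ignores moral/regulatory cost (illustrative assumption).
    unethical = rng.normal(0.0, 2.0, N_UNETHICAL)
    # Naive optimization: pick the single best-looking strategy.
    if unethical.max() > ethical.max():
        unethical_wins += 1

base_rate = N_UNETHICAL / N
selected_rate = unethical_wins / TRIALS
print(f"base rate of unethical strategies: {base_rate:.1%}")
print(f"rate at which the optimum is unethical: {selected_rate:.1%}")
```

Even though only 1% of the strategies are unethical, the maximum of the heavier-tailed group usually beats the maximum of the ethical group, so the selected optimum is unethical in a large majority of runs. This is the sense in which inspecting the top-ranked strategies, rather than a random sample, is where problems are most likely to surface.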
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process,” added MacKay.