Academics offer solution to unethical bias in AI

British and Swiss mathematicians and statisticians believe they have found a way of managing the risk of unethical bias in artificial intelligence systems.

AI is increasingly being deployed in commercial situations. In insurance, for instance, there may be legitimate reasons for using AI to set different prices for different people, but it may also be profitable to "game their psychology or willingness to shop around".

In their joint paper, An Ethical Eye on AI, researchers and mathematicians from the University of Warwick, Imperial College London, EPFL Lausanne and Sciteb present a new mathematical principle for reining in AI's bias towards making unethical and potentially costly choices. They show that, of the vast number of potential strategies the technology has to choose from, some are unethical and will incur not just a moral cost but a significant potential economic penalty if identified. This, the group posits, may lead to regulators levying significant fines, or to customers boycotting those brands.

There is therefore a clear incentive to understand in what circumstances AI might adopt an unethical strategy, and reduce or even eliminate that risk.

The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.

Professor MacKay said: “Our suggested Unethical Optimisation Principle can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.

“The principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimisation/learning process.”
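The intuition behind the principle can be illustrated with a toy simulation (all of the numbers and the strategy model below are hypothetical, chosen for illustration, and do not come from the paper): even when unethical strategies are rare, an optimiser that searches a large strategy space for the highest return will pick them far more often than their share of the space would suggest, because even a small payoff edge pulls them towards the top of the ranking.

```python
import random

random.seed(0)

N_STRATEGIES = 2_000        # size of the strategy space (hypothetical)
UNETHICAL_FRACTION = 0.01   # 1% of strategies are unethical (hypothetical)
BOOST = 1.0                 # assumed extra expected return from unethical play
TRIALS = 500                # independent optimisation runs

picked_unethical = 0
for _ in range(TRIALS):
    # Each strategy has a noisy return; unethical ones get a small edge.
    best_return = float("-inf")
    best_is_unethical = False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < UNETHICAL_FRACTION
        ret = random.gauss(0.0, 1.0) + (BOOST if unethical else 0.0)
        if ret > best_return:
            best_return = ret
            best_is_unethical = unethical
    if best_is_unethical:
        picked_unethical += 1

print(f"Unethical share of strategy space: {UNETHICAL_FRACTION:.0%}")
print(f"Optimiser chose an unethical strategy in "
      f"{picked_unethical / TRIALS:.0%} of runs")
```

In runs of this sketch, the optimiser selects an unethical strategy in a much larger fraction of cases than the 1% base rate, which is the disproportion the principle describes; inspecting the strategies that win the optimisation is then an efficient way to surface the problematic ones.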
