AI is unfair. It can be inaccurate (in several ways), biased (in several ways, and against several groups), disproportionate, exploitable, and opaque. The policy world is awash in AI-governance frameworks, ethical guidelines, and other policy documents, but these lack concrete standards and provide little guidance on how to choose among competing versions of (un)fairness. In other words, they abdicate the responsibility of setting priorities among values. At the same time, many of these documents harshly criticize AI and algorithmic tools for deficiencies in one particular aspect of fairness without considering whether an alternative design that fixed the problem would make the system more “unfair” in other respects. The result is ad-hoc criticism that is abundant and hard to predict.
This article offers a strategy for AI ethics officers to navigate the twin problems of abdication and ad-hoc criticism in the regulatory landscape. After explaining the meaning and sources of the most important forms of AI unfairness, we argue that AI developers should make the inevitable trade-offs among fairness goals as consciously and intentionally as the context allows. Beyond that, in the absence of clear legal requirements to prioritize one form of fairness over another, an algorithm that makes well-considered trade-offs among values should be deemed “fair enough.”
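To make the inevitability of these trade-offs concrete, consider a minimal sketch in Python (our illustration, not the article's analysis; the data, group labels, and function names are hypothetical). It evaluates the same toy predictions against two widely used statistical fairness criteria: the classifier treats truly qualified members of both groups identically (equal true positive rates), yet selects the groups at different overall rates. When the groups' underlying qualification rates differ, equalizing one criterion generally forces a violation of the other, a tension formalized in the impossibility results of Kleinberg et al. (2016) and Chouldechova (2017).

```python
# Hypothetical records: (group, true_label, predicted_label).
# Group A's base rate of qualification (0.50) differs from Group B's (0.25).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(rows):
    """Share of rows receiving a positive prediction (demographic parity compares this across groups)."""
    return sum(pred for _, _, pred in rows) / len(rows)

def true_positive_rate(rows):
    """Share of truly qualified rows receiving a positive prediction (equal opportunity compares this)."""
    positives = [r for r in rows if r[1] == 1]
    return sum(pred for _, _, pred in positives) / len(positives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"Group {group}: positive rate = {positive_rate(rows):.2f}, "
          f"TPR = {true_positive_rate(rows):.2f}")

# Output:
# Group A: positive rate = 0.75, TPR = 1.00
# Group B: positive rate = 0.50, TPR = 1.00
#
# Equal opportunity holds (TPR = 1.00 for both groups), but demographic
# parity does not (0.75 vs. 0.50). On this data, equalizing the positive
# rates would require flipping a prediction that breaks TPR or false
# positive rate parity instead: the "fix" relocates the unfairness.
```

The point of the sketch is not that either criterion is the right one; it is that a developer who "repairs" the disparity flagged by one audit will often create a disparity measurable by another, which is why we argue the choice among criteria should be made consciously rather than by default.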