
Document Type
Article
Publication Date
June 2025
Publication Citation
100 Indiana Law Journal 1431
Abstract
The dominant critique of algorithmic fairness in AI decision-making, particularly in criminal justice, is that increasing fairness reduces the accuracy of predictions, thereby imposing a cost on society. This Article challenges that assumption by empirically analyzing the COMPAS algorithm, a widely used and widely discussed risk assessment tool in the U.S. criminal justice system.
This Article makes two contributions. First, it demonstrates that widely used AI models do more than replicate existing biases: they exacerbate them. Using causal inference methods, we show that racial bias is not only present in the COMPAS dataset but is also worsened by AI models such as COMPAS itself. This finding has implications for legal scholarship and policymaking, as it (a) challenges the assumption that AI can offer an objective or neutral improvement over human decision-making and (b) provides counterevidence to the idea that AI merely mirrors preexisting human biases.
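To make the amplification claim concrete, here is a minimal sketch, assuming ProPublica's public COMPAS release (compas-scores-two-years.csv, with columns race, two_year_recid, and decile_score) and ProPublica's convention of treating decile scores of five and above as "high risk." It compares the Black-White gap in observed two-year rearrest rates with the gap in the tool's own high-risk labels. This is an illustration of the kind of comparison at issue, not the Article's causal-inference analysis.

```python
# Illustrative sketch only, not the Article's method. Assumes ProPublica's
# public COMPAS release and its column names (race, two_year_recid,
# decile_score); the URL below is ProPublica's GitHub mirror.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)
df = df[df["race"].isin(["African-American", "Caucasian"])].copy()

# Dichotomize decile scores as ProPublica did: 5 and above = "high risk".
df["high_risk"] = (df["decile_score"] >= 5).astype(int)

for col, label in [("two_year_recid", "observed rearrest"),
                   ("high_risk", "COMPAS high-risk label")]:
    rates = df.groupby("race")[col].mean()
    gap = rates["African-American"] - rates["Caucasian"]
    print(f"{label}: Black {rates['African-American']:.2f}, "
          f"White {rates['Caucasian']:.2f}, gap {gap:+.2f}")

# If the gap in the model's labels exceeds the gap in the outcome variable,
# the tool amplifies, rather than merely mirrors, the disparity in the data.
```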
Second, this Article reframes the debate over the cost of fairness in algorithmic decision-making for criminal justice. It shows that applying fairness constraints does not necessarily reduce the accuracy of recidivism predictions. AI systems operationalize concepts such as risk by making implicit and often flawed normative choices about what to predict and how to predict it. The claim that fair AI models decrease accuracy assumes that the model's prediction is an optimal baseline. In fact, fairness constraints can correct distortions introduced by biased outcome variables, which magnify systemic racial disparities in rearrest data rather than reflect actual risk. In some cases, interventions can introduce algorithmic fairness without imposing the cost often presumed in policy discussions.
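As a sketch of how such an intervention can be tested empirically, the following uses the open-source fairlearn library's reductions approach to fit a simple recidivism classifier with and without a demographic-parity constraint and compares accuracy on held-out data. The features (age, priors_count) and data source are illustrative assumptions, and this is not the Article's own method. Note, too, that "accuracy" here is measured against rearrest, the very outcome variable the Article argues is biased, so a small drop (or none) on this metric need not imply a real-world cost.

```python
# Illustrative sketch only, not the Article's method. Assumes the same
# ProPublica COMPAS file as above and the fairlearn package
# (pip install fairlearn).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)
df = df[df["race"].isin(["African-American", "Caucasian"])].copy()

X = df[["age", "priors_count"]]  # illustrative features, not the Article's
y = df["two_year_recid"]         # the (arguably biased) outcome variable
race = df["race"]                # sensitive attribute

X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(
    X, y, race, test_size=0.3, random_state=0)

# Unconstrained baseline.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline accuracy:   ", accuracy_score(y_te, base.predict(X_te)))

# The same model trained under a demographic-parity constraint.
fair = ExponentiatedGradient(LogisticRegression(max_iter=1000),
                             constraints=DemographicParity())
fair.fit(X_tr, y_tr, sensitive_features=r_tr)
print("constrained accuracy:", accuracy_score(y_te, fair.predict(X_te)))
```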
These findings are consequential beyond criminal justice. Similar dynamics exist in AI-driven decision-making in lending, hiring, and housing, where biased outcome variables, not merely the choice of proxies, reinforce systemic inequalities. By providing empirical evidence that fairness constraints can improve rather than undermine decision-making, this Article advances the conversation on how law and policy should approach AI bias, particularly when algorithmic decisions affect fundamental rights.
Recommended Citation
Cofone, Ignacio and Khern-am-nuai, Warut (2025), "The Overstated Cost of AI Fairness in Criminal Justice," Indiana Law Journal: Vol. 100: Iss. 4, Article 4.
Available at: https://www.repository.law.indiana.edu/ilj/vol100/iss4/4
Included in
Civil Rights and Discrimination Commons, Criminal Procedure Commons, Science and Technology Law Commons