
Algorithmic and Non-Algorithmic Fairness: Should We Revise our View of the Latter Given Our View of the Former?


Kasper Lippert-Rasmussen

Abstract: In the US context, critics of court use of algorithmic risk prediction tools have argued that COMPAS involves unfair machine bias because it generates higher false positive rates of predicted recidivism for black offenders than for white offenders. In response, some have argued that algorithmic fairness concerns, either also or only, calibration across groups (roughly, that a score assigned by the algorithm involves the same probability of the individual having the target property across different groups of individuals), and that, for mathematical reasons, it is virtually impossible to equalize false positive rates without impairing calibration. I argue that in standard non-algorithmic contexts, such as hiring, we do not think that lack of calibration entails unfair bias, and that it is difficult to see why algorithmic contexts should differ fairness-wise from non-algorithmic ones in this respect. Hence, we should reject the view that calibration is necessary for fairness in an algorithmic context.

Executive Summary

The article examines the debate surrounding algorithmic fairness, focusing on the COMPAS risk assessment tool used in the US judicial system. Critics charge that COMPAS's higher false positive rates for black offenders than for white offenders constitute unfair machine bias; defenders respond that calibration across groups, which ensures that a given score carries the same probability of the target property in each group, is the relevant fairness criterion. The author rejects this defense, contending that calibration is not a necessary condition for fairness in algorithmic contexts. Since non-algorithmic contexts, such as hiring practices, do not require calibration for fairness, algorithmic contexts should not be held to a different standard.

Key Points

  • Critics argue that COMPAS exhibits unfair bias due to higher false positive rates for black offenders.
  • Some defend COMPAS by emphasizing calibration across groups as a fairness metric.
  • The author argues that calibration is not necessary for fairness in algorithmic contexts, citing non-algorithmic examples.
  • The article suggests that algorithmic fairness should not be judged differently from non-algorithmic fairness.
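The tension between the two fairness metrics named above can be made concrete. The following sketch, using hypothetical toy data (not COMPAS data), computes both metrics for two groups with different recidivism base rates; the scores turn out to be calibrated across groups even though the false positive rates differ, which is the mathematical trade-off the defenders of COMPAS invoke.

```python
# Toy illustration of two group-fairness metrics, using hypothetical data.
# Each record is (group, risk_score, reoffended): risk_score is 0 (low) or
# 1 (high); reoffended is 0 or 1.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were scored high risk."""
    scores_of_negatives = [r for g, r, y in records if g == group and y == 0]
    return sum(scores_of_negatives) / len(scores_of_negatives)

def calibration(records, group):
    """Share of high-risk-scored members of `group` who reoffended."""
    outcomes_of_flagged = [y for g, r, y in records if g == group and r == 1]
    return sum(outcomes_of_flagged) / len(outcomes_of_flagged)

# Hypothetical cohort: group A has a higher base rate of reoffending (6/20)
# than group B (3/20).
data = (
    [("A", 1, 1)] * 6 + [("A", 1, 0)] * 4 + [("A", 0, 0)] * 10 +
    [("B", 1, 1)] * 3 + [("B", 1, 0)] * 2 + [("B", 0, 0)] * 15
)

for g in ("A", "B"):
    print(g, round(false_positive_rate(data, g), 3), calibration(data, g))
```

Here a high score means reoffending with probability 0.6 in both groups (calibration holds), yet non-reoffenders in group A are flagged high-risk far more often than those in group B (4/14 versus 2/17), mirroring the pattern the COMPAS critics point to.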

Merits

Balanced Perspective

The article provides a balanced view by acknowledging both sides of the debate and offering a nuanced argument against the necessity of calibration for fairness.

Clear Argumentation

The author presents a clear and logical argument, supported by examples from non-algorithmic contexts, to challenge the prevailing view on algorithmic fairness.

Demerits

Limited Scope

The article focuses primarily on the US context and the COMPAS algorithm, which may limit the generalizability of its arguments to other jurisdictions or algorithms.

Assumptions About Non-Algorithmic Fairness

The argument assumes that non-algorithmic contexts do not require calibration for fairness, which may not hold true in all cases or contexts.

Expert Commentary

The article presents a compelling argument that challenges the current emphasis on calibration as a metric for algorithmic fairness. By drawing parallels with non-algorithmic contexts, the author effectively questions the necessity of calibration in algorithmic decision-making. However, the argument could be strengthened by addressing potential counterarguments and considering a broader range of examples. The article's focus on the US context and the COMPAS algorithm limits its generalizability, but the core argument is relevant to the ongoing debate on algorithmic fairness. The implications for policy and practice are significant, as they suggest a need to revisit current standards and regulations. Overall, the article contributes valuable insights to the discourse on algorithmic fairness and highlights the importance of a balanced and nuanced approach to this complex issue.

Recommendations

  • Further research should explore the applicability of the arguments to other jurisdictions and algorithms to enhance the generalizability of the findings.
  • Policymakers should consider revisiting current standards for algorithmic fairness to ensure they are aligned with non-algorithmic standards and are not overly stringent.
