In search of effectiveness and fairness in proving algorithmic discrimination in EU law

Ljupcho Grozdanovski

Examples of discriminatory algorithmic recruitment of workers have triggered a debate on the application of the non-discrimination principle in the EU. Algorithms challenge two principles in the system of evidence in EU non-discrimination law. The first is effectiveness, given that, due to algorithmic opacity, the parties in algorithmic discrimination cases do not have easy and unrestricted access to facts enabling them to support their claims. The second is fairness, insofar as the launching and unfolding of the evidentiary debate requires lifting the veil of algorithmic opacity: a colossal task, placing unrealistic burdens of proof on claimants and respondents. Algorithmic discrimination thus seems impossible to prove and, consequently, falls outside the scope of application of EU non-discrimination law. Two possible solutions are proposed. First, as regards effectiveness, a right to access evidence in favour of victims of algorithmic discrimination should be recognized, through a joint reading of EU non-discrimination law and the General Data Protection Regulation. Second, to allocate the burden of proof more proportionately, an extension of the grounds for defence of respondents could allow them to establish that biases were autonomously developed by an algorithm.

Executive Summary

The article 'In search of effectiveness and fairness in proving algorithmic discrimination in EU law' explores the challenges of applying the EU non-discrimination principle to algorithmic decision-making. The author argues that algorithmic opacity undermines both the effectiveness and the fairness of the system of evidence in non-discrimination cases. Two solutions are proposed: recognizing a right of victims to access evidence, grounded in a joint reading of EU non-discrimination law and the GDPR, and extending respondents' grounds for defence to allow them to establish that biases were autonomously developed by an algorithm. The article highlights the need for legal adaptations to address the distinctive evidentiary challenges posed by algorithmic discrimination.

Key Points

  • Algorithmic opacity challenges the effectiveness and fairness of evidence in EU non-discrimination law.
  • Proposed solutions include recognizing victims' right to access evidence and extending respondents' grounds for defence.
  • The article argues for legal adaptations to bring algorithmic discrimination within the reach of EU non-discrimination law.

Merits

Comprehensive Analysis

The article provides a thorough examination of the challenges posed by algorithmic discrimination in the context of EU non-discrimination law, offering a nuanced understanding of the issues.

Innovative Solutions

The proposed solutions are innovative and address the core problems of effectiveness and fairness in proving algorithmic discrimination.

Demerits

Implementation Challenges

The proposed solutions may face significant implementation challenges, particularly in practical application and enforcement: courts would need to reconcile evidence-access rights with trade-secret protections, and establishing that a bias was autonomously developed by an algorithm may itself demand considerable technical expertise.

Limited Scope

The article focuses primarily on EU law, which may limit its applicability to other jurisdictions with different legal frameworks.

Expert Commentary

The article effectively highlights the critical evidentiary challenges posed by algorithmic discrimination in EU non-discrimination law. The author's proposal to recognize a right of victims to access evidence, and to extend respondents' grounds for defence, is a significant contribution to the ongoing debate. However, the practical implementation of these solutions may be complex and require substantial legal and technical adaptations. The article's focus on EU law, while thorough, may limit its applicability to jurisdictions with different legal frameworks. Nonetheless, the insights provided are valuable and could inform broader discussions on AI governance and data protection. The article's rigorous analysis and innovative solutions make it a noteworthy contribution to the field.

Recommendations

  • Further research should explore the practical implementation of the proposed solutions in various legal and technical contexts.
  • Policymakers should consider updating non-discrimination laws to specifically address algorithmic decision-making and ensure fairness and transparency.
