Rewriting the Narrative of AI Bias: A Data Feminist Critique of Algorithmic Inequalities in Healthcare

Pin Lean Lau

AI-driven healthcare systems perpetuate gendered and racialised health inequalities, misdiagnosing marginalised populations due to historical exclusions in medical research and dataset construction. These disparities are further reinforced by androcentric medical epistemologies in which white male bodies are treated as the universal norm. Additionally, the ‘othering’ of marginalised communities manifests in algorithmic exclusions or biases, where AI systems flag non-dominant populations as statistical anomalies rather than central subjects, reinforcing structural biases in healthcare access and treatment. This article critically examines the framing of AI bias within legal narratives, particularly through the EU AI Act, arguing that bias is not merely a technical flaw but a structural consequence of exclusionary knowledge production. The study integrates data feminism as a counter-narrative to dominant AI governance frameworks, applying insights from Richard Sherwin’s legal narrative theory, Kimberlé Crenshaw’s intersectionality theory, Carol Smart’s socio-legal critiques, and Ruha Benjamin’s abolitionist AI perspectives. The analysis highlights how specific articles of the EU AI Act, namely risk-based classification (Article 6), bias audits (Article 10), and transparency requirements (Article 13), reinforce androcentric, racialised, and neoliberal exclusions, failing to mandate intersectional accountability or structural interventions. By challenging the formalist bias framing in AI regulation, the article advocates for equity-driven AI governance through data feminism, embedding data sovereignty, participatory oversight, and redistributive justice.

Executive Summary

The article critiques AI-driven healthcare systems for perpetuating gendered and racialised health inequalities rooted in historical exclusions in medical research and dataset construction. It argues that AI bias is a structural consequence of exclusionary knowledge production rather than a technical flaw, and advocates for equity-driven AI governance through data feminism. Examining the EU AI Act, the study highlights the Act's limitations in mandating intersectional accountability and structural interventions, and proposes a counter-narrative to dominant AI governance frameworks that integrates data feminism and intersectionality theory to promote redistributive justice and participatory oversight.

Key Points

  • AI-driven healthcare systems perpetuate gendered and racialised health inequalities
  • AI bias is a structural consequence of exclusionary knowledge production
  • The EU AI Act has limitations in addressing intersectional accountability and structural interventions

Merits

Intersectional Analysis

The article provides a nuanced intersectional analysis of AI bias, highlighting the complex interactions between gender, race, and other forms of marginalisation.

Data Feminist Perspective

The study integrates a data feminist perspective, offering a critical counter-narrative to dominant AI governance frameworks and promoting equity-driven AI governance.

Demerits

Limited Policy Recommendations

The article could benefit from more concrete policy recommendations for implementing equity-driven AI governance and addressing the limitations of the EU AI Act.

Expert Commentary

The article offers a timely and critical analysis of AI bias in healthcare, highlighting the need for a more nuanced understanding of the interactions between technology, society, and marginalisation. By integrating data feminism and intersectionality theory, the study provides a valuable counter-narrative to dominant AI governance frameworks. However, it would be strengthened by more concrete policy recommendations and a more detailed analysis of the practical implications of implementing equity-driven AI governance.

Recommendations

  • The development of more inclusive and equitable AI systems in healthcare through participatory design and testing
  • The implementation of data sovereignty and redistributive justice mechanisms in AI governance to promote more equitable access to healthcare and treatment
