Do not go gentle into that good night: The European Union's and China's different approaches to the extraterritorial application of artificial intelligence laws and regulations
Wisconsin Law Review’s 2025 Symposium
The Wisconsin Law Review presents: The Shadow Carceral State. Registration available here. Date and time: Friday, September 26, 9:00am–5:30pm CDT. Location: Madison Museum of Contemporary Art, 227 State Street, Madison, WI 53703. CLE for this event is pending. Summary: On...
Fly in the Face of Bias: Algorithmic Bias in Law Enforcement’s Facial Recognition Technology and the Need for an Adaptive Legal Framework
Algorithmic Bias in Hiring Algorithms: A Kenyan Perspective
The use of machine learning algorithms has permeated nearly all aspects of life. With this steady integration, tasks previously handled by humans are increasingly falling into the ‘hands’ of machines. Ideally, this would be celebrated as a great improvement...
Privacy-Preserving Models for Legal Natural Language Processing
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on the domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we asked to which...
Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements
Machine learning models have become pervasive in our everyday life; they decide on important matters influencing our education, employment and judicial system. Many of these predictive systems are commercial products protected by trade secrets, hence their decision-making is opaque. Therefore,...
Responsible Legal Augmentation: Integrating Generative AI into Legal Practice
This article examines Ayinde v London Borough of Haringey; Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), a landmark High Court judgment addressing the use of generative artificial intelligence (GenAI) in legal practice. The case arose when counsel submitted...
Auditing of AI in Railway Technology – a European Legal Approach
Abstract Artificial intelligence (AI) promises major gains in productivity, safety and convenience through automation. Despite the associated euphoria, care needs to be taken to ensure that no immature, unsafe products enter the market, especially in high-risk areas. Artificial intelligence systems...
Governance in Ethical, Trustworthy AI Systems: Extension of the ECCOLA Method for AI Ethics Governance Using GARP
Background: The continuous development of artificial intelligence (AI) and increasing rate of adoption by software startups calls for governance measures to be implemented at the design and development stages to help mitigate AI governance concerns. Most AI ethical design and...
Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI
Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective
Abstract AI will change many aspects of the world we live in, including the way corporations are governed. Many efficiencies and improvements are likely, but there are also potential dangers, including the threat of harmful impacts on third parties, discriminatory...
AI and Bias in Recruitment: Ensuring Fairness in Algorithmic Hiring
The integration of Artificial Intelligence (AI) in recruitment processes has revolutionized hiring by increasing efficiency, reducing time-to-hire, and enabling data-driven decision-making. However, despite these advancements, concerns about algorithmic bias and fairness remain central to ethical AI deployment. This paper explores...
How Can the Law Address the Effects of Algorithmic Bias in the Healthcare Context?
This paper examines how UK ‘hard laws’ can adapt to regulate algorithmic bias in the healthcare context. I explore the causes of algorithmic bias, which sets the foundation for how the law can address this issue. I critically analyse elements...
Beyond bias: algorithmic machines, discrimination law and the analogy trap
Automated Data Bias Mitigation Technique for Algorithmic Fairness
Machine learning fairness enhancement methods based on data bias correction are usually divided into two processes: the determination of sensitive attributes (such as race and gender) and the correction of data bias. In terms of determining sensitive attributes, existing studies...
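The data-bias-correction step the abstract above describes can be illustrated with one common preprocessing technique, reweighing: each (sensitive attribute, label) group receives a weight so that, under the weighted distribution, the sensitive attribute and the outcome become statistically independent. This is a minimal sketch, not the paper's own method; the function name and toy data are illustrative assumptions.

```python
# Sketch of reweighing-style data bias correction (an assumed illustration,
# not the method of the paper above): weight each example by
# P(A=a) * P(Y=y) / P(A=a, Y=y) so attribute and label decouple.
from collections import Counter

def reweigh(sensitive, labels):
    """Return one weight per example: P(a) * P(y) / P(a, y)."""
    n = len(labels)
    count_a = Counter(sensitive)              # marginal counts of the attribute
    count_y = Counter(labels)                 # marginal counts of the label
    count_ay = Counter(zip(sensitive, labels))  # joint counts
    return [
        (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for a, y in zip(sensitive, labels)
    ]

# Toy data: a binary sensitive attribute and binary hiring outcomes.
sens = ["m", "m", "m", "f", "f", "f"]
outc = [1, 1, 0, 1, 0, 0]
weights = reweigh(sens, outc)
# Under these weights, both groups have the same weighted positive rate.
```

Training any standard classifier with these sample weights then counteracts the imbalance in the raw data without altering the features themselves.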
Ethical Considerations in AI: Bias Mitigation and Fairness in Algorithmic Decision Making
The rapid integration of artificial intelligence (AI) into critical decision-making domains—such as healthcare, finance, law enforcement, and hiring—has raised significant ethical concerns regarding bias and fairness. Algorithmic decision-making systems, if not carefully designed and monitored, risk perpetuating and amplifying societal...
Data bias, algorithmic discrimination and the fairness issues of individual credit accessibility
Purpose: This study examines the impact of data bias and algorithmic discrimination on individual credit accessibility in China’s financial system. It aims to align financial inclusion and equity goals with statistical fairness conditions by constructing fairness metrics from multiple dimensions. The...
NeurIPS 2025 Mexico City – Call for Workshops
NeurIPS Creative AI Track 2025: Humanity
NeurIPS 2025 Mexico City – Call for Socials