Strengthening legal protection against discrimination by algorithms and artificial intelligence

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper examines discrimination risks arising from algorithmic decision-making in domains such as crime prediction, credit scoring, and hiring, and systematically evaluates the applicability and limitations of existing EU non-discrimination law and the GDPR in regulating AI-driven discrimination. Employing doctrinal legal analysis, cross-member-state case comparison, and policy assessment, the study integrates indirect discrimination theory with data protection frameworks to identify critical gaps in attribution, burden of proof, and enforcement mechanisms. Its principal contributions are threefold: first, it proposes concrete enforcement enhancements, including strengthened inter-agency regulatory coordination and mandatory algorithmic impact assessments; second, it argues for a sector-specific regulatory model that tailors rules to high-stakes domains (e.g., finance, justice, employment) rather than imposing general AI rules; and third, it outlines a legally rigorous yet operationally feasible institutional framework for EU AI governance.

📝 Abstract
Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit crime, who will be a good employee, who will default on a loan, etc. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. The paper shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination. Data protection law could also help to defend people against discrimination. Proper enforcement of non-discrimination law and data protection law could help to protect people. However, the paper shows that both legal instruments have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of current rules can be improved. The paper also explores whether additional rules are needed. The paper argues for sector-specific, rather than general, rules, and outlines an approach to regulate algorithmic decision-making.
Problem

Research questions and friction points this paper is trying to address.

Evaluating Europe's legal protection against discriminatory algorithmic decisions
Identifying weaknesses in non-discrimination and data protection laws for AI
Proposing improved enforcement and sector-specific algorithmic regulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates current European legal protection frameworks
Proposes sector-specific rules for algorithmic regulation
Suggests how enforcement of non-discrimination and data protection law can be strengthened