🤖 AI Summary
This paper examines discrimination risks arising from algorithmic decision-making in domains such as crime prediction, credit scoring, and hiring, and systematically evaluates how far existing EU non-discrimination law and the GDPR can regulate AI-driven discrimination. Employing doctrinal legal analysis, cross-member-state case comparison, and policy assessment, the study integrates indirect discrimination doctrine with data protection frameworks to identify critical gaps in attribution, burden of proof, and enforcement mechanisms. Its principal contributions are threefold: first, it proposes concrete enforcement enhancements, including strengthened inter-agency regulatory coordination and mandatory algorithmic impact assessments; second, it advances a "domain-differentiated" regulatory model that tailors rules to high-stakes sectors (e.g., finance, justice, employment); and third, it offers a legally rigorous yet operationally feasible institutional framework for EU AI governance.
📝 Abstract
Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit a crime, who will be a good employee, who will default on a loan, and so on. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. This paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. It shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination, and that data protection law could also help to defend people against discrimination. Proper enforcement of both instruments could thus protect people, but the paper shows that each has severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of current rules can be improved, and explores whether additional rules are needed, arguing for sector-specific rather than general rules and outlining an approach to regulating algorithmic decision-making.