🤖 AI Summary
This study examines structural tensions in the EU AI Act's non-discrimination regulation: high-risk AI systems face stringent oversight yet suffer from a misalignment between input-data governance and output monitoring, while general-purpose AI (e.g., large language models) lacks concrete fairness requirements. Adopting an interdisciplinary legal and computer-science approach that combines close legal text analysis, a mapping of algorithmic fairness concepts onto regulatory provisions, and an assessment of compliance feasibility, the paper systematically identifies institutional fragmentation and enforcement gaps in anti-discrimination oversight across these two AI categories. Its three principal contributions are: (1) clarifying that the AI Act's non-discrimination obligations apply primarily to high-risk systems; (2) exposing regulatory inconsistency between data-input requirements and decisional outcomes within the high-risk compliance chain; and (3) recommending more specific auditing and testing methodologies to bridge governance silos, thereby furnishing both a theoretical foundation and actionable pathways for future standardization.
📝 Abstract
What constitutes a fair decision? This question is difficult for humans and becomes even more challenging when Artificial Intelligence (AI) models are used. In light of discriminatory algorithmic behaviors, the EU has recently passed the AI Act, which mandates specific rules for AI models, incorporating both traditional legal non-discrimination regulations and machine-learning-based algorithmic fairness concepts. This paper aims to bridge these two concepts in the AI Act through, first, a high-level introduction of both concepts aimed at legal and computer-science-oriented scholars, and, second, an in-depth analysis of the relationship between legal non-discrimination regulations and algorithmic fairness in the AI Act. Our analysis reveals three key findings: (1) Most non-discrimination regulations target only high-risk AI systems. (2) The regulation of high-risk systems encompasses both data-input requirements and output monitoring, though these regulations are often inconsistent and raise questions of computational feasibility. (3) Regulations for General Purpose AI Models, such as Large Language Models that are not simultaneously classified as high-risk systems, currently lack specificity compared to the other regulations. Based on these findings, we recommend developing more specific auditing and testing methodologies for AI systems. This paper aims to serve as a foundation for future interdisciplinary collaboration between legal scholars and computer-science-oriented machine learning researchers studying discrimination in AI systems.
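To make the computer-science side of this bridge concrete, the sketch below shows what a minimal fairness audit could look like in code, pairing an input-data check (group representation) with an output check (demographic parity difference). This is an illustrative sketch only: the function names, column semantics, toy data, and the 0.2 review threshold are assumptions for demonstration, not requirements drawn from the AI Act or methods specified by the paper.

```python
# Illustrative sketch: a minimal fairness audit combining an input-data
# check (group representation) with an output check (demographic parity
# difference). All names, thresholds, and data here are hypothetical
# assumptions, not AI Act requirements.

from collections import Counter


def group_representation(protected_attrs: list[str]) -> dict[str, float]:
    """Share of each protected-attribute group in the (training) data."""
    counts = Counter(protected_attrs)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def demographic_parity_difference(decisions: list[int],
                                  protected_attrs: list[str]) -> float:
    """Largest gap in positive-decision rates between any two groups.

    decisions: binary model outputs (1 = favorable outcome).
    protected_attrs: group membership per individual, in the same order.
    """
    positives: Counter = Counter()
    totals: Counter = Counter()
    for decision, group in zip(decisions, protected_attrs):
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Toy decisions of a hypothetical high-risk system.
    groups = ["A", "A", "A", "B", "B", "B", "B"]
    decisions = [1, 1, 0, 1, 0, 0, 0]

    print("Input check (group shares):", group_representation(groups))
    gap = demographic_parity_difference(decisions, groups)
    print(f"Output check (parity gap): {gap:.2f}")
    # 0.2 is a purely illustrative heuristic threshold for flagging review.
    print("Flag for review:", gap > 0.2)
```

Demographic parity is only one of several competing fairness criteria (others condition on qualification or error rates), which is part of why the paper finds that mapping such metrics onto the Act's input- and output-side obligations raises consistency and feasibility questions.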