🤖 AI Summary
The EU AI Act (AIA) exhibits structural deficiencies, particularly ambiguous legal definitions and insufficient technical specifications, in its robustness and cybersecurity requirements for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). This paper is the first to systematically identify and analyze the law-technology gap embedded in these provisions. Leveraging interdisciplinary analysis (statutory interpretation, machine learning robustness theory such as adversarial robustness and out-of-distribution generalization, and cybersecurity practice), we assess the operational feasibility of the requirements. Our contribution is a cross-disciplinary compliance framework that delivers actionable recommendations for the European Commission's guidance documents, harmonized standard development, and the benchmarking methodology stipulated under Art. 15(2) AIA. By aligning legal terminology with empirically grounded ML security research, the framework advances precise, implementation-ready resilience governance, thereby addressing a critical gap in the AIA's regulatory architecture.
📝 Abstract
The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in the provisions on robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that both robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards and European Commission guidelines, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. In doing so, we seek to bridge the gap between legal terminology and ML research, fostering better alignment between research and implementation efforts.