AI Act Evaluation Benchmark: An Open, Transparent, and Reproducible Evaluation Dataset for NLP and RAG Systems

📅 2026-03-10
🤖 AI Summary
This work addresses the current lack of open resources for automated or semi-automated assessment of AI systems' compliance with regulatory frameworks such as the EU AI Act, a gap that forces reliance on error-prone manual methods. To bridge it, the authors propose an open, transparent, and reproducible compliance evaluation dataset tailored for NLP and Retrieval-Augmented Generation (RAG) systems, encompassing four tasks: risk-level classification, provision retrieval, obligation generation, and question answering. By integrating legal domain knowledge with large language models, the approach enables grounded, highly relevant generation of controlled scenarios, and tackles the challenge of ambiguous risk boundaries, such as that between limited and minimal risk, which the Act does not explicitly delineate. Experimental results demonstrate the dataset's efficacy: a RAG system evaluated on it achieves F1 scores of 0.87 and 0.85 on prohibited and high-risk scenarios, respectively.
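As a rough illustration of the reported metrics (not the authors' evaluation code), per-class F1 for the risk-level classification task can be computed from paired gold and predicted labels; the label values and predictions below are hypothetical examples.

```python
# Hypothetical per-class F1 computation for risk-level classification.
# Labels and predictions are illustrative, not taken from the paper's dataset.

def f1_per_class(gold, pred, label):
    """F1 score for one risk level, from paired gold/predicted label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["prohibited", "high", "limited", "minimal", "prohibited", "high"]
pred = ["prohibited", "high", "minimal", "minimal", "prohibited", "limited"]

for level in ("prohibited", "high"):
    print(level, round(f1_per_class(gold, pred, level), 2))
```

Reporting F1 per risk level, as the paper does for prohibited and high-risk scenarios, avoids a single aggregate score masking weak performance on any one category.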

📝 Abstract
The rapid rollout of AI across heterogeneous public and societal sectors has escalated the need for compliance with regulatory standards and frameworks. The EU AI Act has emerged as a landmark in the regulatory landscape. The development of solutions that elicit the level of AI systems' compliance with such standards is often limited by the lack of resources, hindering the semi-automated or automated evaluation of their performance. This creates a need for manual work, which is often error-prone, resource-intensive, or limited to cases not clearly described by the regulation. This paper presents an open, transparent, and reproducible method of creating a resource that facilitates the evaluation of NLP models, with a strong focus on RAG systems. We have developed a dataset that contains the tasks of risk-level classification, article retrieval, obligation generation, and question answering for the EU AI Act. The dataset files are in a machine-readable format. To generate the files, we utilise domain knowledge as an exegetical basis, combining it with the processing and reasoning power of large language models to generate scenarios along with the respective tasks. Our methodology demonstrates a way to harness language models for grounded generation with high document relevancy. Moreover, we overcome limitations such as navigating the decision boundaries of risk levels that are not explicitly defined within the EU AI Act, such as limited and minimal cases. Finally, we demonstrate our dataset's effectiveness by evaluating a RAG-based solution that reaches F1 scores of 0.87 and 0.85 for prohibited and high-risk scenarios, respectively.
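The abstract does not specify the dataset's file schema; as a sketch only, one machine-readable encoding that bundles the four tasks for a single generated scenario might look like the following (all field names and values are hypothetical assumptions, not the paper's actual format).

```python
import json

# Hypothetical JSONL-style record covering the four tasks for one scenario.
# Field names and values are illustrative assumptions, not the paper's schema.
record = {
    "scenario": "An AI system ranks job applicants for hiring decisions.",
    "risk_level": "high",                             # risk-level classification target
    "relevant_articles": ["Article 6", "Annex III"],  # article retrieval targets
    "obligation": "The provider must implement a risk management system.",
    "qa": {
        "question": "Does this system fall under a high-risk category?",
        "answer": "Yes; employment-related AI systems are listed in Annex III.",
    },
}

line = json.dumps(record)      # one record per line in a JSONL file
restored = json.loads(line)    # machine-readable round trip
print(restored["risk_level"])
```

A flat one-record-per-line encoding like this keeps the resource directly consumable by both evaluation harnesses and RAG pipelines without custom parsing.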
Problem

Research questions and friction points this paper is trying to address.

AI regulation
EU AI Act
compliance evaluation
NLP systems
RAG systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Act compliance
RAG evaluation
grounded generation
risk-level classification
reproducible benchmark
Athanasios Davvetas
National Centre for Scientific Research “Demokritos”, Institute of Informatics and Telecommunications, Aghia Paraskevi, Greece
Michael Papademas
National Centre for Scientific Research “Demokritos”, Institute of Informatics and Telecommunications, Aghia Paraskevi, Greece; Department of Communication, Media and Culture, Panteion University of Social and Political Sciences, Athens, Greece
Xenia Ziouvelou
National Centre for Scientific Research “Demokritos”, Institute of Informatics and Telecommunications, Aghia Paraskevi, Greece
Vangelis Karkaletsis
NCSR "Demokritos"
Natural language processing, knowledge representation, artificial intelligence