🤖 AI Summary
This work addresses the lack of open resources for automated or semi-automated assessment of AI systems' compliance with regulatory frameworks such as the EU AI Act, a gap that forces reliance on error-prone manual methods. To bridge it, the authors propose an open, transparent, and reproducible compliance evaluation dataset for NLP and Retrieval-Augmented Generation (RAG) systems, covering four tasks: risk-level classification, provision retrieval, obligation generation, and question answering. By combining legal domain knowledge with the processing and reasoning capabilities of large language models, the approach enables grounded, highly document-relevant generation of controlled scenarios and tackles the ambiguous risk boundaries, such as those between limited and minimal risk, that the Act leaves unspecified. Experiments demonstrate the dataset's effectiveness: a RAG system evaluated on it reaches F1 scores of 0.87 and 0.85 on prohibited and high-risk scenarios, respectively.
📝 Abstract
The rapid rollout of AI across heterogeneous public and societal sectors has escalated the need for compliance with regulatory standards and frameworks, among which the EU AI Act has emerged as a landmark. The development of solutions that elicit the level of AI systems' compliance with such standards is often limited by a lack of resources, hindering the semi-automated or automated evaluation of their performance. This creates a need for manual work, which is often error-prone, resource-constrained, or restricted to cases not clearly described by the regulation. This paper presents an open, transparent, and reproducible method of creating a resource that facilitates the evaluation of NLP models, with a strong focus on RAG systems. We have developed a dataset that contains the tasks of risk-level classification, article retrieval, obligation generation, and question answering for the EU AI Act. The dataset files are in a machine-readable format. To generate them, we use domain knowledge as an exegetical basis, combining it with the processing and reasoning power of large language models to generate scenarios along with the respective tasks. Our methodology demonstrates a way to harness language models for grounded generation with high document relevancy. In addition, we overcome limitations such as navigating the decision boundaries of risk levels that are not explicitly defined within the EU AI Act, such as the limited- and minimal-risk cases. Finally, we demonstrate our dataset's effectiveness by evaluating a RAG-based solution that reaches F1 scores of 0.87 and 0.85 for prohibited and high-risk scenarios, respectively.
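The abstract does not fix a concrete schema for the machine-readable files. As an illustration only, a single dataset record covering the four tasks might be serialised as one JSON object per line (JSONL); every field name and value below is a hypothetical sketch, not the dataset's actual format:

```python
import json

# Hypothetical record covering the four tasks named in the abstract.
# All keys and values are illustrative assumptions, not the real schema.
record = {
    "scenario": "A biometric identification system deployed in public spaces.",
    "risk_level": "high-risk",                  # classification task
    "articles": ["Article 6", "Annex III"],     # article retrieval task
    "obligation": "The provider must establish a risk management system.",
    "qa": {
        "question": "Does this system require a conformity assessment?",
        "answer": "Yes, high-risk systems undergo conformity assessment.",
    },
}

# One JSON object per line keeps the file easy for machines to stream and parse.
line = json.dumps(record, ensure_ascii=False)
restored = json.loads(line)
```

A format like this lets each task's evaluation harness read only the fields it needs while keeping one scenario per record.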