EU-Agent-Bench: Measuring Illegal Behavior of LLM Agents Under EU Law

📅 2025-10-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) agents operating in the EU risk non-compliance with stringent legal frameworks such as the GDPR and the Equal Treatment Directive, yet no benchmark systematically evaluates their legal adherence. Method: We introduce EU-Agent-Bench, the first evaluation benchmark explicitly designed to assess the legal compliance of LLM agents under EU law. It comprises (1) a manually curated, verifiable test suite covering high-risk domains including data protection, algorithmic discrimination, and research integrity, grounded in primary EU legislation; (2) a legislative citation alignment mechanism that explicitly links model behaviors to specific statutory provisions; and (3) controlled system-prompting experiments quantifying how embedding legal text affects compliance. Contribution/Results: Empirical results show that explicit legal prompting significantly improves compliance rates. EU-Agent-Bench includes a publicly available preview set and a held-out private test set, establishing a reproducible, attributable, and legally grounded paradigm for assessing LLMs' legal safety in regulated environments.

📝 Abstract
Large language models (LLMs) are increasingly deployed as agents in various contexts by being given tools at their disposal. However, LLM agents can exhibit unpredictable behaviors, including taking undesirable and/or unsafe actions. To measure the latent propensity of LLM agents for taking illegal actions in an EU legislative context, we introduce EU-Agent-Bench, a verifiable human-curated benchmark that evaluates an agent's alignment with EU legal norms in situations where benign user inputs could lead to unlawful actions. Our benchmark spans scenarios across several categories, including data protection, bias/discrimination, and scientific integrity, with each user request allowing for both compliant and non-compliant execution of the requested actions. Comparing the model's function calls against a rubric exhaustively supported by citations of the relevant legislation, we evaluate the legal compliance of frontier LLMs, and furthermore investigate the effect on compliance of providing the relevant legislative excerpts in the agent's system prompt along with explicit instructions to comply. We release a public preview set for the research community, while holding out a private test set to prevent data contamination when evaluating upcoming models. We encourage future work extending agentic safety benchmarks to other legal jurisdictions and to multi-turn and multilingual interactions. We release our code at https://github.com/ilijalichkovski/eu-agent-bench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents' compliance with EU legal standards
Measuring illegal behavior risks in automated agent systems
Benchmarking AI safety across data protection and discrimination scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark evaluates LLM agent legal compliance
Compares function calls against legislative citations
Tests compliance effect of legislative prompts
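The innovations above hinge on comparing an agent's function calls against a rubric whose items cite specific statutory provisions. A minimal sketch of how such a check might look (all names here, `RubricItem`, `score_call`, the example arguments, and the cited article are illustrative assumptions, not the benchmark's actual API or rubric):

```python
# Hypothetical sketch: flag rubric violations in an agent's tool call.
# The benchmark's real rubric and evaluation code are more elaborate;
# this only illustrates citation-linked compliance scoring.
from dataclasses import dataclass


@dataclass
class RubricItem:
    description: str      # behavior being checked
    citation: str         # statutory provision, e.g. "GDPR Art. 9(1)"
    forbidden_args: dict  # argument values that would violate the provision


def score_call(tool_name: str, args: dict, rubric: list) -> list:
    """Return the citations of every rubric item the call violates."""
    violations = []
    for item in rubric:
        if all(args.get(k) == v for k, v in item.forbidden_args.items()):
            violations.append(item.citation)
    return violations


# Illustrative rubric: exporting special-category (health) data is non-compliant.
rubric = [
    RubricItem(
        description="exporting health data without a lawful basis",
        citation="GDPR Art. 9(1)",
        forbidden_args={"include_health_records": True},
    )
]

call_args = {"table": "users", "include_health_records": True}
print(score_call("export_data", call_args, rubric))  # → ['GDPR Art. 9(1)']
```

A compliant call (one omitting the health-record flag) would return an empty list, so per-scenario compliance can be aggregated as the fraction of calls with no violations.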
Ilija Lichkovski
Unknown affiliation
physics, machine learning
Alexander Müller
AI Safety Initiative Groningen
Mariam Ibrahim
AI Safety Initiative Groningen
Tiwai Mhundwa
AI Safety Initiative Groningen