🤖 AI Summary
This work addresses the insufficient robustness of large language models (LLMs) in legal reasoning and the lack of rigorous, controllable evaluation benchmarks. Methodologically, we introduce the first parameterizable benchmark built on legal argumentation attack graphs, grounded in Dung's abstract argumentation framework. Our approach integrates parameterized graph generation with templated natural-language translation to yield semantically unambiguous, continuously scalable, and dynamically extensible evaluation instances. Contributions include: (1) establishing the first systematic, formally grounded benchmark for evaluating legal reasoning; (2) empirically revealing that mainstream LLMs exhibit high error rates and unstable performance even on low-complexity legal reasoning tasks; and (3) demonstrating that even advanced reasoning-optimized models make substantial errors at higher complexity, thereby validating the benchmark's rigor, discriminative power, and practical utility for stress-testing legal AI systems.
📝 Abstract
Generative large language models, used as tools in the legal domain, have the potential to improve the justice system. However, the reasoning behavior of current generative models is brittle and poorly understood, so these models cannot yet be responsibly applied in the domains of law and evidence. In this paper, we introduce an approach for creating benchmarks that can be used to evaluate the reasoning capabilities of generative language models. These benchmarks are dynamically varied, scalable in their complexity, and have formally unambiguous interpretations. In this study, we illustrate the approach using witness testimony, focusing on the underlying argument attack structure. We dynamically generate both linear and non-linear argument attack graphs of varying complexity and translate these into reasoning puzzles about witness testimony expressed in natural language. We show that state-of-the-art large language models often fail at these reasoning puzzles, even at low complexity. The models make obvious mistakes, and their inconsistent performance indicates that their reasoning capabilities are brittle. Furthermore, at higher complexity, even state-of-the-art models specifically promoted for their reasoning capabilities make mistakes. We show the viability of using a parameterized benchmark of varying complexity to evaluate the reasoning capabilities of generative language models. As such, these findings contribute to a better understanding of the limitations of the reasoning capabilities of generative models, which is essential when designing responsible AI systems in the legal domain.
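To make the described pipeline concrete, below is a minimal sketch of one way such benchmark instances could be generated, assuming a linear attack chain evaluated under Dung's grounded semantics and translated with simple templates. The function names, witness naming, and sentence templates are illustrative assumptions, not the authors' implementation; non-linear variants would only change how the attack relation is generated.

```python
# Hypothetical sketch (not the authors' code): generate a linear attack chain,
# compute its grounded labelling under Dung's semantics, and render it as a
# templated witness-testimony puzzle.

def linear_attack_graph(n):
    """Arguments 0..n-1, where argument i+1 attacks argument i (a linear chain)."""
    arguments = list(range(n))
    attacks = {(i + 1, i) for i in range(n - 1)}
    return arguments, attacks

def grounded_labelling(arguments, attacks):
    """Iteratively label arguments IN/OUT; anything never decided stays UNDECIDED."""
    labels = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in labels:
                continue
            attackers = [b for (b, t) in attacks if t == a]
            if all(labels.get(b) == "OUT" for b in attackers):
                labels[a] = "IN"    # no attacker survives (or there are none)
                changed = True
            elif any(labels.get(b) == "IN" for b in attackers):
                labels[a] = "OUT"   # defeated by an accepted attacker
                changed = True
    return {a: labels.get(a, "UNDECIDED") for a in arguments}

def to_puzzle(arguments, attacks):
    """Render the graph as natural-language statements about witness testimony."""
    names = [f"Witness {chr(65 + a)}" for a in arguments]  # assumes at most 26 arguments
    lines = [f"{names[0]}'s testimony supports the main claim."]
    for attacker, target in sorted(attacks):
        lines.append(f"{names[attacker]}'s testimony undermines {names[target]}'s testimony.")
    lines.append("Question: should the main claim be accepted?")
    return "\n".join(lines)

if __name__ == "__main__":
    args, atts = linear_attack_graph(4)
    print(to_puzzle(args, atts))
    print(grounded_labelling(args, atts))  # ground truth for scoring a model's answer
```

Scaling the number of arguments (and, for non-linear variants, the shape of the attack relation) is what makes complexity a controllable parameter: the grounded labelling provides an unambiguous ground truth against which model answers can be scored.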