🤖 AI Summary
Problem: Existing evaluations of legal large language models (LLMs) rely predominantly on static benchmarks, which fail to capture the dynamic nature of real-world legal practice and its procedural compliance requirements, thereby hindering the advancement of legal AI. Method: We introduce J1-ENVS, the first interactive, dynamic legal environment designed specifically for LLM agents; it covers six prototypical scenarios from Chinese judicial practice across three levels of environmental complexity. We further propose J1-EVAL, a dynamic evaluation framework that enables fine-grained, joint assessment of task execution capability and procedural compliance. Contribution/Results: Extensive experiments across 17 state-of-the-art LLM agents reveal that even top-performing models such as GPT-4o score below 60% overall, underscoring the substantial challenge of dynamic legal reasoning. This work establishes a new benchmark and evaluation methodology for advancing legal intelligence from static knowledge acquisition toward dynamic, procedurally grounded competence.
📝 Abstract
The gap between static benchmarks and the dynamic nature of real-world legal practice poses a key barrier to advancing legal intelligence. To bridge this gap, we introduce J1-ENVS, the first interactive and dynamic legal environment tailored for LLM-based agents. Guided by legal experts, it comprises six representative scenarios from Chinese legal practice across three levels of environmental complexity. We further introduce J1-EVAL, a fine-grained evaluation framework designed to assess both task performance and procedural compliance across varying levels of legal proficiency. Extensive experiments on 17 LLM agents reveal that, while many models demonstrate solid legal knowledge, they struggle with procedural execution in dynamic settings. Even the SOTA model, GPT-4o, falls short of 60% overall performance. These findings highlight persistent challenges in achieving dynamic legal intelligence and offer valuable insights to guide future research.
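To make the interactive-evaluation idea concrete, below is a minimal, hypothetical sketch of a gym-style agent-environment loop that jointly scores task completion and procedural compliance. Every name here (`MockLegalEnv`, `StepResult`, `evaluate`, the scripted procedure) is an illustrative assumption; the paper does not specify the actual interfaces of J1-ENVS or J1-EVAL.

```python
# Hypothetical sketch of a dynamic legal-environment evaluation loop.
# None of these names come from the paper; the real J1-ENVS / J1-EVAL
# interfaces are not specified here, so everything below is assumed.
from dataclasses import dataclass


@dataclass
class StepResult:
    observation: str      # e.g. a new filing or a judge's question
    task_reward: float    # progress on the substantive legal task
    procedural_ok: bool   # whether the action respected required procedure
    done: bool


class MockLegalEnv:
    """Toy stand-in for an interactive legal scenario (NOT the J1-ENVS API)."""

    def __init__(self, required_steps):
        self.required_steps = list(required_steps)  # mandated procedural order
        self.cursor = 0

    def reset(self) -> str:
        self.cursor = 0
        return "Case opened: plaintiff files a civil complaint."

    def step(self, action: str) -> StepResult:
        expected = self.required_steps[self.cursor]
        ok = expected in action  # crude stand-in for a compliance check
        if ok:
            self.cursor += 1
        done = self.cursor == len(self.required_steps)
        return StepResult(
            observation=f"Court responds to: {action!r}",
            task_reward=1.0 if done else 0.0,
            procedural_ok=ok,
            done=done,
        )


def evaluate(agent, env: MockLegalEnv, max_turns: int = 10) -> dict:
    """Joint scoring: task completion AND procedural compliance."""
    obs = env.reset()
    task_score, violations, turns = 0.0, 0, 0
    for _ in range(max_turns):
        action = agent(obs)  # agent is any callable: str -> str
        result = env.step(action)
        turns += 1
        task_score += result.task_reward
        violations += 0 if result.procedural_ok else 1
        obs = result.observation
        if result.done:
            break
    compliance = 1.0 - violations / turns
    return {"task": task_score, "compliance": compliance}


# Usage: a scripted "agent" that happens to follow the procedure in order.
if __name__ == "__main__":
    steps = ["serve summons", "exchange evidence", "closing argument"]
    env = MockLegalEnv(steps)
    script = iter(steps)
    print(evaluate(lambda obs: next(script), env))
    # -> {'task': 1.0, 'compliance': 1.0}
```

The actual J1-EVAL additionally grades agents across varying levels of legal proficiency and three levels of environmental complexity, which this toy loop does not attempt to model.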