🤖 AI Summary
This study addresses the challenge of structuring legal texts, which traditionally relies heavily on manual annotation, by proposing the first fully automated, domain-agnostic regulatory rule extraction pipeline. The approach comprises four stages: document standardization, semantic decomposition, multidimensional evaluation guided by 19 interpretable criteria, and upstream-prioritized iterative refinement under a constrained computational budget, enabling high-quality rule extraction without any labeled data. By integrating an LLM-as-a-judge mechanism with an auditable self-iterative optimization strategy, the method achieves significant performance gains across financial regulation, healthcare, and AI governance domains. In downstream compliance question answering, responses grounded in the extracted rules are preferred over prior work in 73.8% of cases under single-rule retrieval, rising to 84.0% under broader-domain retrieval.
📝 Abstract
Regulatory documents encode legally binding obligations that LLM-based systems must respect. Yet converting dense, hierarchically structured legal text into machine-readable rules remains a costly, expert-intensive process. We present De Jure, a fully automated, domain-agnostic pipeline for extracting structured regulatory rules from raw documents, requiring no human annotation, domain-specific prompting, or gold labels. De Jure operates through four sequential stages: normalization of source documents into structured Markdown; LLM-driven semantic decomposition into structured rule units; multi-criteria LLM-as-a-judge evaluation across 19 dimensions spanning metadata, definitions, and rule semantics; and iterative repair of low-scoring extractions within a bounded regeneration budget, where upstream components are repaired before rule units are evaluated. We evaluate De Jure across four models on three regulatory corpora spanning finance, healthcare, and AI governance. On the finance domain, De Jure yields consistent, monotonic improvement in extraction quality, reaching peak performance within three judge-guided iterations. De Jure generalizes effectively to healthcare and AI governance, maintaining high performance across both open- and closed-source models. In a downstream compliance question-answering evaluation via RAG, responses grounded in De Jure-extracted rules are preferred over prior work in 73.8% of cases at single-rule retrieval depth, rising to 84.0% under broader retrieval, confirming that extraction fidelity translates directly into downstream utility. These results demonstrate that explicit, interpretable evaluation criteria can substitute for human annotation in complex regulatory domains, offering a scalable and auditable path toward regulation-grounded LLM alignment.
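The evaluate-then-repair loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `extract`, `judge`, and `repair` are hypothetical stand-ins for the underlying LLM calls, the 19 criteria are abstracted into per-component scores, and the threshold value is an assumption (the abstract specifies only a bounded budget and that peak quality arrives within three iterations).

```python
PASS_THRESHOLD = 0.8   # assumed score cutoff; not specified in the abstract
MAX_ITERATIONS = 3     # abstract reports peak quality within three iterations


def refine(document, extract, judge, repair):
    """Judge-guided iterative repair: re-generate low-scoring components,
    fixing upstream components (metadata, definitions) before downstream
    rule units, within a bounded regeneration budget."""
    extraction = extract(document)  # stages 1-2: normalize + decompose
    # Upstream-first ordering: rules depend on metadata and definitions.
    order = ["metadata", "definitions", "rules"]
    for _ in range(MAX_ITERATIONS):
        scores = judge(extraction)  # component -> aggregate score in [0, 1]
        failing = [c for c in order if scores.get(c, 1.0) < PASS_THRESHOLD]
        if not failing:
            break  # all components pass; stop spending budget
        # Repair only the most upstream failing component this round,
        # so downstream units are re-evaluated against fixed context.
        extraction = repair(extraction, failing[0])
    return extraction
```

The key design choice mirrored here is that each iteration spends budget on the single most upstream failure, since a flawed definitions block would otherwise propagate errors into every rule unit evaluated after it.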