LegalDuet: Learning Effective Representations for Legal Judgment Prediction through a Dual-View Legal Clue Reasoning

📅 2024-01-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing legal judgment prediction models rely heavily on keyword matching and neglect the dual reasoning process of judges, namely analogy to precedents and citation of statutory provisions, which results in poor generalization on complex or low-frequency cases. To address this, the authors propose a dual-view legal clue reasoning framework: (1) Law Case Reasoning, which models precedent analogy via case similarity; and (2) Legal Ground Reasoning, which anchors predictions to relevant statutory provisions. Building on this, they design a pretraining paradigm integrating dual-channel contrastive learning, legal semantic alignment, and charge-hierarchy-aware representation learning. Evaluated on CAIL2018, the method achieves state-of-the-art performance, improving average accuracy by approximately 4% and noticeably reducing prediction uncertainty. The implementation is publicly available.

📝 Abstract
Most existing Legal Judgment Prediction (LJP) models focus on discovering the legal triggers in the criminal fact description. In real-world scenarios, however, a professional judge not only draws on case experience accumulated from past sentenced legal judgments but also relies on legally grounded reasoning learned from professional legal knowledge. In this paper, we propose LegalDuet, a model that pretrains language models to learn an embedding space tailored for making legal judgments. It introduces a dual-view legal clue reasoning mechanism derived from two reasoning chains of judges: 1) Law Case Reasoning, which makes legal judgments according to judgment experience learned from analogous/confusing legal cases; 2) Legal Ground Reasoning, which matches the legal clues between criminal cases and legal decisions. Our experiments show that LegalDuet achieves state-of-the-art performance on the CAIL2018 dataset, outperforming baselines by about 4% on average. Our dual-view reasoning-based pretraining captures critical legal clues and learns a tailored embedding space that distinguishes criminal cases. It reduces LegalDuet's uncertainty during prediction and brings pretraining advances to confusing/low-frequency charges. All codes are available at https://github.com/NEUIR/LegalDuet.
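The dual-view pretraining described in the abstract is a contrastive-learning setup: fact embeddings are pulled toward analogous-case embeddings (Law Case view) and toward matched legal-ground embeddings (Legal Ground view), with other cases in the batch acting as negatives. A minimal NumPy sketch of such a dual-view objective, assuming an in-batch InfoNCE formulation (the function names, temperature, and loss combination are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def info_nce(queries, positives, temperature=0.07):
    """In-batch InfoNCE: row i of `positives` is the positive for query i;
    every other row in the batch serves as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = q @ p.T / temperature                    # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # cross-entropy on diagonal

def dual_view_loss(fact_emb, case_emb, ground_emb, temperature=0.07):
    # Law Case view: pull each fact toward its analogous precedent embedding.
    # Legal Ground view: pull each fact toward its matched law-article embedding.
    return (info_nce(fact_emb, case_emb, temperature)
            + info_nce(fact_emb, ground_emb, temperature))
```

When the positives are well aligned with the queries, both terms approach zero, which is the sense in which the pretraining shapes an embedding space that separates confusing cases.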
Problem

Research questions and friction points this paper is trying to address.

Legal Judgment Prediction
Precedent-based Reasoning
Complex Case Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-perspective reasoning
Case-based and rule-matching prediction
Open-source code
Pengjie Liu
Zhenghao Liu
Northeastern University
NLP, Information Retrieval
Xiaoyuan Yi
Senior Researcher, Microsoft Research Asia
Natural Language Generation, Societal AI, Large Language Models, Responsible AI
Liner Yang
Associate Professor, Beijing Language and Culture University
Artificial Intelligence, Natural Language Processing
Shuo Wang
Microsoft Research Asia, Beijing, China
Yu Gu
Northeastern University, Shenyang, Liaoning, China
Ge Yu
Northeastern University, Shenyang, Liaoning, China
Xing Xie
Shuang-hua Yang