🤖 AI Summary
To address a key limitation of case-based reasoning (CBR) in AI and Law—its reliance on the assumption that the underlying precedents are consistent—this paper develops an argumentative explanation method for the generalized notion of the reason model, a previously introduced framework that accommodates inconsistent precedents. Methodologically, the paper extends the derivation state argumentation framework (DSA-framework), which so far covers only the traditional consistent reason model, to this generalized setting. The key contributions are: (1) the first argumentative explanation approach explicitly designed for precedential reasoning under inconsistent precedents; (2) an extension of the DSA-framework that explains reasoning according to the generalized reason model; and (3) support for structured, traceable justification of case outcomes even when precedents conflict, enhancing the explainability and trustworthiness of legal AI systems.
📝 Abstract
Precedential constraint is a foundation of case-based reasoning in AI and Law. It generally assumes that the underlying set of precedents is consistent. To relax this assumption, a generalized notion of the reason model has been introduced. While several argumentative explanation approaches exist for reasoning with precedents under the traditional, consistent reason model, no corresponding argumentative explanation method has been developed for this generalized framework, which accommodates inconsistent precedents. To address this gap, this paper examines an extension of the derivation state argumentation framework (DSA-framework) to explain reasoning according to the generalized notion of the reason model.
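To make the consistency assumption concrete, the following is a minimal illustrative sketch of a Horty-style, factor-based reason model — not the paper's formalism. All names (`Case`, `conflicts`, `consistent`, the factor labels) are our own assumptions for illustration: a case base is inconsistent when one precedent's winning reason applies a fortiori in another precedent that was nonetheless decided for the opposite side.

```python
from dataclasses import dataclass

# Illustrative sketch (assumed names, not the paper's notation).
# A case records the factors favouring each side ("pi" = plaintiff,
# "delta" = defendant), the outcome, and the reason — a subset of the
# winning side's factors cited as sufficient for that outcome.

@dataclass(frozen=True)
class Case:
    pi_factors: frozenset     # factors favouring the plaintiff
    delta_factors: frozenset  # factors favouring the defendant
    outcome: str              # "pi" or "delta"
    reason: frozenset         # subset of the winning side's factors

def factors_for(case: Case, side: str) -> frozenset:
    return case.pi_factors if side == "pi" else case.delta_factors

def opposite(side: str) -> str:
    return "delta" if side == "pi" else "pi"

def conflicts(a: Case, b: Case) -> bool:
    """a's winning reason applies a fortiori in b, yet b went the other way."""
    if a.outcome != opposite(b.outcome):
        return False
    # a's winning reason also holds in b ...
    reason_holds = a.reason <= factors_for(b, a.outcome)
    # ... and b's winning side is no stronger in b than it was in a.
    no_stronger = factors_for(b, b.outcome) <= factors_for(a, b.outcome)
    return reason_holds and no_stronger

def consistent(case_base: list[Case]) -> bool:
    return not any(conflicts(a, b) for a in case_base for b in case_base)

# Two precedents deciding indistinguishable fact situations in opposite
# ways violate the traditional consistency assumption:
cb = [
    Case(frozenset({"p1"}), frozenset({"d1"}), "pi", frozenset({"p1"})),
    Case(frozenset({"p1"}), frozenset({"d1"}), "delta", frozenset({"d1"})),
]
print(consistent(cb))  # False
```

The generalized reason model discussed in the abstract is precisely about continuing to reason, and to explain the reasoning, when a check like `consistent` fails.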