Argumentative Reasoning with Language Models on Non-factorized Case Bases

📅 2025-12-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses case-based reasoning (CBR) over non-factorized (i.e., unstructured, unpreprocessed) case bases. The authors propose argumentative agentic models for case-based reasoning (AAM-CBR), a framework that extends abstract argumentation for case-based reasoning (AA-CBR) with large language models (LLMs). AAM-CBR enables factor-level reasoning without exposing raw cases or requiring prior factorization: LLMs dynamically extract factors from new cases and assess whether previous cases cover them. Initial experiments on a synthetic credit card application dataset show that AAM-CBR outperforms a single-prompt baseline when new cases contain a richer set of factors, while pure LLM prompting suffices only when few factors are involved. The results indicate that combining symbolic argumentation with LLMs is crucial for handling cases with many interacting factors, and that the framework supports privacy-preserving CBR by avoiding case preprocessing while keeping inference transparent.

๐Ÿ“ Abstract
In this paper, we investigate how language models can perform case-based reasoning (CBR) on non-factorized case bases. We introduce a novel framework, argumentative agentic models for case-based reasoning (AAM-CBR), which extends abstract argumentation for case-based reasoning (AA-CBR). Unlike traditional approaches that require factorization of previous cases, AAM-CBR leverages language models to determine case coverage and extract factors based on new cases. This enables factor-based reasoning without exposing or preprocessing previous cases, thus improving both flexibility and privacy. We also present initial experiments that assess AAM-CBR's performance by comparing the proposed framework with a baseline that uses a single-prompt approach to incorporate both new and previous cases. The experiments are conducted on a synthetic credit card application dataset. The results show that AAM-CBR surpasses the baseline only when the new case contains a richer set of factors. This finding indicates that language models can handle case-based reasoning with a limited number of factors, but face challenges as the number of factors increases. Consequently, integrating symbolic reasoning with language models, as implemented in AAM-CBR, is crucial for effectively handling cases involving many factors.
Problem

Research questions and friction points this paper is trying to address.

Enables case-based reasoning without factorizing previous cases
Improves flexibility and privacy in argumentative reasoning with language models
Integrates symbolic reasoning to handle cases with many factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language models determine case coverage and extract factors
Framework enables factor-based reasoning without preprocessing previous cases
Integrates symbolic reasoning with language models for many factors
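The symbolic side of AAM-CBR builds on AA-CBR, where past cases are arguments attacking one another based on their factor sets and outcomes. The snippet below is a hypothetical illustration (not the paper's code) of AA-CBR-style inference under grounded semantics, using a toy credit-style case base; in AAM-CBR itself, factor extraction and coverage checks would be performed by language model calls rather than given upfront.

```python
# Hypothetical sketch of AA-CBR-style inference over factor sets
# (illustrative only; not the paper's implementation).

def grounded_extension(args, attacks):
    """Label arguments iteratively: IN if every attacker is OUT,
    OUT if some attacker is IN; return the IN set (grounded extension)."""
    in_set, out_set = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in in_set or a in out_set:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= out_set:
                in_set.add(a)
                changed = True
            elif attackers & in_set:
                out_set.add(a)
                changed = True
    return in_set

def predict(case_base, new_factors):
    """case_base[0] must be the default case (empty factor set).
    Returns the predicted outcome for the new case."""
    attacks = set()
    # A past case attacks a strictly less specific past case with the
    # opposite outcome, provided no same-outcome case lies between them.
    for i, (fi, oi) in enumerate(case_base):
        for j, (fj, oj) in enumerate(case_base):
            if oi != oj and fj < fi and not any(
                ok == oi and fj < fk < fi for fk, ok in case_base
            ):
                attacks.add((i, j))
    # The new case attacks every past case whose factors it lacks.
    for j, (fj, _) in enumerate(case_base):
        if not fj <= new_factors:
            attacks.add(("new", j))
    args = list(range(len(case_base))) + ["new"]
    ext = grounded_extension(args, attacks)
    default_outcome = case_base[0][1]
    return default_outcome if 0 in ext else not default_outcome

# Toy credit-style case base: (factors, approve?)
cases = [
    (frozenset(), False),                    # default: reject
    (frozenset({"income"}), True),           # stable income -> approve
    (frozenset({"income", "debt"}), False),  # ...unless high debt
]
print(predict(cases, frozenset({"income"})))          # True
print(predict(cases, frozenset({"income", "debt"})))  # False
```

The prediction follows the default outcome exactly when the default case survives in the grounded extension; attacking cases flip it, and more specific counterexamples flip it back.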
Wachara Fungwacharakorn
Center of Juris-Informatics, Joint Support-Center for Data Science Research, ROIS, Tokyo, Japan
May Myo Zin
Center of Juris-Informatics, Joint Support-Center for Data Science Research, ROIS, Tokyo, Japan
Ha-Thanh Nguyen
Center of Juris-Informatics, Joint Support-Center for Data Science Research, ROIS, Tokyo, Japan; Research and Development Center for Large Language Models, NII, ROIS, Tokyo, Japan
Yuntao Kong
Center of Juris-Informatics, Joint Support-Center for Data Science Research, ROIS, Tokyo, Japan
Ken Satoh
National Institute of Informatics
Artificial Intelligence