🤖 AI Summary
This work addresses case-based reasoning (CBR) over non-factorized (i.e., unstructured, unpreprocessed) case repositories. We propose Argumentative Agentic Models for CBR (AAM-CBR), the first framework to tightly integrate abstract argumentation theory with large language models (LLMs). AAM-CBR enables factor-level reasoning without exposing raw cases or requiring prior factorization, via LLM-driven dynamic factor extraction and an argumentation-based coverage-assessment mechanism. Experiments on a synthetic credit-approval dataset demonstrate that AAM-CBR significantly outperforms single-prompt baselines on multi-factor novel cases, validate the need for synergy between symbolic reasoning and LLMs in high-dimensional CBR, and reveal inherent limitations of pure LLMs in scenarios with complex factor interactions. The framework establishes a new paradigm for privacy-preserving, scalable CBR: it eliminates reliance on case preprocessing while ensuring transparency and logical soundness of inference.
📝 Abstract
In this paper, we investigate how language models can perform case-based reasoning (CBR) on non-factorized case bases. We introduce a novel framework, argumentative agentic models for case-based reasoning (AAM-CBR), which extends abstract argumentation for case-based reasoning (AA-CBR). Unlike traditional approaches that require factorization of previous cases, AAM-CBR leverages language models to determine case coverage and to extract factors from new cases. This enables factor-based reasoning without exposing or preprocessing previous cases, improving both flexibility and privacy. We also present initial experiments that assess AAM-CBR by comparing it with a baseline that incorporates both new and previous cases in a single prompt. The experiments are conducted on a synthetic credit card application dataset. The results show that AAM-CBR surpasses the baseline only when the new case contains a richer set of factors. This finding indicates that language models can handle case-based reasoning with a limited number of factors but struggle as the number of factors increases. Consequently, integrating symbolic reasoning with language models, as implemented in AAM-CBR, is crucial for effectively handling cases involving many factors.
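For readers unfamiliar with the symbolic machinery that AAM-CBR builds on, the following is a minimal sketch of classical AA-CBR prediction under grounded semantics: past cases attack more general cases with the opposite outcome, the new case attacks cases whose factors it does not contain, and the prediction follows from whether the default case survives in the grounded extension. This is a generic illustration of AA-CBR, not the paper's implementation; the factor names and the tiny credit-style case base are hypothetical.

```python
def aa_cbr_predict(casebase, new_factors,
                   default_outcome="reject", other_outcome="accept"):
    """AA-CBR prediction under grounded semantics (illustrative sketch).

    casebase: list of (set_of_factors, outcome) pairs.
    new_factors: set of factors describing the new case.
    """
    default = (frozenset(), default_outcome)      # default case: no factors
    new = (frozenset(new_factors), "?")           # "?" marks the new case
    args = [default, new] + [(frozenset(f), o) for f, o in casebase]

    def attack(a, b):
        (fa, oa), (fb, ob) = a, b
        if ob == "?":                  # nothing attacks the new case
            return False
        if oa == "?":                  # irrelevance attack by the new case
            return not (fb <= fa)
        # specificity attack: strictly more specific case, different
        # outcome, and minimal (no intermediate case with oa in between)
        return (oa != ob and fb < fa and
                not any(oc == oa and fb < fc < fa for fc, oc in args))

    attackers = {a: [b for b in args if attack(b, a)] for a in args}

    # grounded extension: least fixed point of the characteristic function
    # F(E) = {a : every attacker of a is attacked by some member of E}
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any(attack(c, b) for c in ext)
                           for b in attackers[a])}
        if defended == ext:
            break
        ext = defended

    return default_outcome if default in ext else other_outcome
```

With a hypothetical case base where `stable_income` favors acceptance but `high_debt` overturns it, a new case with only `stable_income` is predicted `accept`, while one with both factors falls back to `reject`. AAM-CBR's contribution, per the abstract, is to let an LLM supply the coverage checks and factor extraction that this symbolic core takes as given.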