ArgMed-Agents: Explainable Clinical Decision Reasoning with LLM Disscusion via Argumentation Schemes

📅 2024-03-10
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the opacity, lack of verifiability, and insufficient explainability of large language models (LLMs) in clinical decision-making, this paper proposes a multi-agent self-iterative debate framework grounded in clinical argumentation patterns. Methodologically, it formally embeds argumentation schemes into LLM reasoning, constructs directed argument graphs to model conflicting viewpoints, and integrates a symbolic logic solver for automated verification of rational consistency. The key contributions are: (1) the first realization of an argumentation-scheme-driven, self-explanatory clinical debate mechanism for LLMs; and (2) a synergistic framework unifying argument graph structure with symbolic reasoning, ensuring both interpretability and formal guarantee of correctness. Experiments on complex clinical decision tasks demonstrate that our approach outperforms state-of-the-art prompting techniques in accuracy, while significantly improving user trust, explanation quality, and logical coherence of reasoning.

Technology Category

Application Category

📝 Abstract
There are two main barriers to using large language models (LLMs) in clinical reasoning. Firstly, while LLMs exhibit significant promise in Natural Language Processing (NLP) tasks, their performance in complex reasoning and planning falls short of expectations. Secondly, LLMs use uninterpretable methods to make clinical decisions that are fundamentally different from the clinician's cognitive processes. This leads to user distrust. In this paper, we present a multi-agent framework called ArgMed-Agents, which aims to enable LLM-based agents to make explainable clinical decision reasoning through interaction. ArgMed-Agents performs self-argumentation iterations via Argumentation Scheme for Clinical Discussion (a reasoning mechanism for modeling cognitive processes in clinical reasoning), and then constructs the argumentation process as a directed graph representing conflicting relationships. Ultimately, use symbolic solver to identify a series of rational and coherent arguments to support decision. We construct a formal model of ArgMed-Agents and present conjectures for theoretical guarantees. ArgMed-Agents enables LLMs to mimic the process of clinical argumentative reasoning by generating explanations of reasoning in a self-directed manner. The setup experiments show that ArgMed-Agents not only improves accuracy in complex clinical decision reasoning problems compared to other prompt methods, but more importantly, it provides users with decision explanations that increase their confidence.
Problem

Research questions and friction points this paper is trying to address.

Medical Decision Making
Large Language Models
Explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

ArgMed-Agents
Medical Decision-Making
Large Language Models
Shengxin Hong
Shengxin Hong
Hubei University of Technology, Wuhan, China
L
Liang Xiao
Hubei University of Technology, Wuhan, China
X
Xin Zhang
Hubei University of Technology, Wuhan, China
J
Jianxia Chen
Hubei University of Technology, Wuhan, China