Enhancing Conflict Resolution in Language Models via Abstract Argumentation

📅 2024-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to identify and resolve conflicts arising from incomplete or inconsistent information in consensus-building and persuasion tasks, resulting in poor interpretability and weak generalization. Method: This paper introduces the first systematic integration of abstract argumentation—a formal framework for reasoning about conflicting claims—into LLM training. We propose a fine-grained, process-explainable conflict resolution paradigm combining argument-structure modeling, self-explanatory generation, and process-aware supervised learning. Our curated argumentation framework dataset explicitly supports modeling premise-level conflicts, inferential dependencies, and conclusion acceptability. Contribution/Results: Our method significantly outperforms chain-of-thought baselines in conflict resolution accuracy. Explanation-aware training improves cross-scenario generalization accuracy by over 23%. Moreover, it generates traceable, verifiable argumentative paths—enhancing transparency and mitigating the LLM black-box problem.
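To make the formal machinery concrete, below is a minimal Python sketch of a Dung-style abstract argumentation framework and the grounded-extension computation that argument-acceptability tasks of this kind rest on. The function names and representation are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' code) of a Dung-style abstract
# argumentation framework: arguments plus an attack relation, with the
# grounded extension computed as a least fixed point.

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function
    F(S) = {a | every attacker of a is counter-attacked by some member of S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(arg, s):
        # arg is acceptable w.r.t. s if s attacks every attacker of arg.
        return all(any((d, b) in attacks for d in s) for b in attackers[arg])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:  # fixed point reached
            return s
        s = nxt

# Example: a attacks b, b attacks c.
# a is unattacked, and a defends c against b, so the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

In this toy framework, `a` is accepted because nothing attacks it, `b` is rejected because `a` attacks it, and `c` is accepted because `a` counter-attacks its only attacker, `b`.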

📝 Abstract
In recent years, large language models (LLMs) have made significant advances in building human-like, engaging dialogue systems. However, in tasks such as consensus-building and persuasion, LLMs often struggle to resolve conflicts arising from incomplete or inconsistent information, revealing their limitations in real-world applications. Given these limitations, abstract argumentation, a formal framework designed to resolve conflicts and inconsistencies, becomes particularly relevant. In this paper, we aim to enhance the conflict-solving capabilities of LLMs by leveraging formal abstract argumentation, integrating language model learning with symbolic computation. To this end, we develop and curate a dataset of diverse abstract argumentation frameworks, each accompanied by a detailed explanation of the argument acceptability computation. We then fine-tune LLMs on this dataset, focusing on abstract conflict resolution tasks. As a comparative baseline, LLMs are also evaluated with a chain-of-thought approach; however, they fail to resolve the conflict-based arguments effectively. Our experiments demonstrate that process explanations play a crucial role in learning: models trained with explanations exhibit superior generalization accuracy compared to those trained solely on question-answer pairs. Furthermore, by leveraging LLMs' self-explanation capabilities, our approach provides detailed illustrations that mitigate the lack of transparency typically associated with neural networks.
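The abstract describes pairing each framework with an explanation of the acceptability computation. Below is a hedged sketch of how one such (framework, explanation, answer) triple could be serialized into a fine-tuning example; the field names and prompt template are assumptions for illustration, not the paper's actual dataset schema.

```python
# Hypothetical serialization of one (framework, explanation, answer) triple
# into a supervised fine-tuning example.  Field names and the prompt template
# are illustrative assumptions, not the paper's dataset format.
import json

def make_sft_example(arguments, attacks, explanation, extension):
    prompt = (
        "Arguments: " + ", ".join(sorted(arguments)) + "\n"
        "Attacks: " + "; ".join(f"{x} attacks {y}" for x, y in sorted(attacks)) + "\n"
        "Question: which arguments are in the grounded extension?"
    )
    # The target pairs the step-by-step explanation with the final answer,
    # mirroring the paper's process-aware supervision.
    completion = explanation + "\nAnswer: " + ", ".join(sorted(extension))
    return {"prompt": prompt, "completion": completion}

example = make_sft_example(
    arguments={"a", "b", "c"},
    attacks={("a", "b"), ("b", "c")},
    explanation=("a has no attackers, so a is accepted; a attacks b, so b is "
                 "rejected; a defends c against b, so c is accepted."),
    extension={"a", "c"},
)
print(json.dumps(example, indent=2))
```

The key design point, per the abstract, is that the completion carries the full computation trace rather than only the final answer, which is what drives the reported generalization gains.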
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to resolve conflicts arising from incomplete or inconsistent information
Whether abstract argumentation can enhance LLMs' conflict-solving capabilities
Whether process explanations improve model generalization and transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating abstract argumentation frameworks with language models
Fine-tuning models on an argument acceptability computation dataset
Using process explanations to enhance generalization and transparency
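Because the paper stresses traceable, verifiable argumentative paths, a natural companion is a symbolic checker that validates a model-claimed set of accepted arguments. The sketch below is an assumed utility, not from the paper; it checks the two standard conditions, conflict-freeness and admissibility.

```python
# An assumed companion utility (not from the paper): symbolically verify a
# model-claimed set of accepted arguments, making the generated
# argumentative path checkable rather than taken on trust.

def is_admissible(claimed, attacks):
    claimed = set(claimed)
    # Conflict-free: no claimed argument attacks another claimed argument.
    if any((x, y) in attacks for x in claimed for y in claimed):
        return False
    # Admissible: every attacker of a claimed argument is counter-attacked
    # by some claimed argument.
    for a in claimed:
        for (x, y) in attacks:
            if y == a and not any((d, x) in attacks for d in claimed):
                return False
    return True

attacks = {("a", "b"), ("b", "c")}
print(is_admissible({"a", "c"}, attacks))  # True: a defends c against b
print(is_admissible({"b"}, attacks))       # False: b cannot defend against a
```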
Authors
Zhaoqun Li, Zhejiang University
Xiaotong Fang, Zhejiang University
Chen Chen, Zhejiang University
Mengze Li, Zhejiang University