🤖 AI Summary
This work proposes a novel end-to-end approach to argument mining by reframing the task as a text-to-text generation problem, thereby circumventing the complexity of traditional multi-stage pipelines that rely on rule-based post-processing and extensive hyperparameter tuning. Leveraging a pretrained encoder-decoder language model, the method jointly generates argument spans, their component types, and relational structures in a unified framework, eliminating the need for task-specific post-processing steps. This design significantly simplifies the overall pipeline and enhances structural adaptability. The approach achieves state-of-the-art performance across three established benchmark datasets—AAEC, AbstRCT, and CDCP—demonstrating both its effectiveness and generalizability in diverse argumentative contexts.
📝 Abstract
Argument Mining (AM) aims to uncover the argumentative structures within a text. Previous methods decompose the task into several subtasks, such as span identification, component classification, and relation classification, and consequently require rule-based postprocessing to derive argumentative structures from the outputs of each subtask. This adds to the complexity of the model and expands the hyperparameter search space. To address this difficulty, we propose a simple yet strong method based on a text-to-text generation approach using a pretrained encoder-decoder language model. Our method simultaneously generates argumentatively annotated text for spans, components, and relations, eliminating the need for task-specific postprocessing and hyperparameter tuning. Furthermore, because it is a straightforward text-to-text generation method, our approach adapts easily to various types of argumentative structures. Experimental results demonstrate the effectiveness of our method, as it achieves state-of-the-art performance on three different types of benchmark datasets: the Argument-annotated Essays Corpus (AAEC), AbstRCT, and the Cornell eRulemaking Corpus (CDCP).
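To make the idea of "argumentatively annotated text" concrete, here is a minimal sketch of one possible target-side linearization for such a text-to-text model: argument spans are wrapped in tags carrying their component type and relation, and a parser recovers the structure from the generated string. The specific tag format (`<Claim>`, `<Premise>`, `Support=N`) is a hypothetical illustration, not the paper's actual scheme.

```python
import re

def annotate(text, components):
    """Wrap each argument span in tags carrying its type and relation.

    components: list of (start, end, comp_type, relation) tuples, where
    relation is e.g. "Support=0" (this component supports component 0)
    or None for a root claim. Spans are assumed sorted and non-overlapping.
    """
    out, cursor = [], 0
    for i, (start, end, ctype, rel) in enumerate(components):
        out.append(text[cursor:start])           # copy untagged text
        rel_attr = f" {rel}" if rel else ""
        out.append(f"<{ctype} id={i}{rel_attr}>{text[start:end]}</{ctype}>")
        cursor = end
    out.append(text[cursor:])                    # trailing untagged text
    return "".join(out)

def parse(annotated):
    """Recover (span_text, comp_type, id, relation) tuples from tagged text."""
    pattern = re.compile(r"<(\w+) id=(\d+)(?: (\w+=\d+))?>(.*?)</\1>")
    return [(m.group(4), m.group(1), int(m.group(2)), m.group(3))
            for m in pattern.finditer(annotated)]

essay = "Cars should be banned. They pollute the air."
comps = [(0, 22, "Claim", None), (23, 44, "Premise", "Support=0")]
tagged = annotate(essay, comps)
# tagged == '<Claim id=0>Cars should be banned.</Claim> '
#           '<Premise id=1 Support=0>They pollute the air.</Premise>'
```

Because both the input essay and the target annotation are plain strings, a single encoder-decoder model can be fine-tuned on `(essay, tagged)` pairs, and the same round-trip parser replaces the rule-based postprocessing that multi-stage pipelines need.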