🤖 AI Summary
Existing argument mining models generalize poorly: they rely heavily on dataset-specific lexical features rather than underlying argumentative structure, which causes substantial performance drops across datasets. Method: This paper presents the first systematic evaluation of mainstream Transformer models' generalization across 17 English argumentation datasets, empirically uncovering widespread overfitting to "lexical shortcuts". We propose a paradigm that integrates task-specific contrastive pre-training with joint training over multiple benchmarks, and establish a cross-dataset zero-shot and few-shot transfer evaluation framework. Contribution/Results: While the models achieve >85% F1 on seen datasets, their average F1 drops by 32% on unseen ones. Our approach improves cross-domain F1 by up to 19.6%, markedly strengthening structural understanding and generalization robustness, and offering a path toward trustworthy deployment of argument analysis models.
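The cross-dataset transfer protocol described above can be sketched as a leave-one-dataset-out loop: train jointly on all benchmarks but one, then score zero-shot on the held-out one. All names, the toy data, and the majority-class stand-in model below are illustrative assumptions, not the paper's actual pipeline or metric.

```python
def leave_one_out_eval(datasets, train_fn, eval_fn):
    """For each dataset, train on all the others and evaluate zero-shot on it."""
    results = {}
    for held_out, test_data in datasets.items():
        # Joint multi-benchmark training pool: every dataset except the held-out one.
        train_data = [ex for name, data in datasets.items()
                      if name != held_out for ex in data]
        model = train_fn(train_data)
        results[held_out] = eval_fn(model, test_data)  # score on the unseen dataset
    return results

# Toy stand-ins: a majority-class "model" and plain accuracy as the metric.
def train_majority(train_data):
    labels = [y for _, y in train_data]
    return max(set(labels), key=labels.count)

def eval_accuracy(model, test_data):
    return sum(1 for _, y in test_data if y == model) / len(test_data)

# Hypothetical mini-benchmarks: (sentence_id, is_argument) pairs.
datasets = {
    "debates": [("s1", 1), ("s2", 1), ("s3", 0)],
    "forums":  [("s4", 0), ("s5", 1), ("s6", 1)],
    "science": [("s7", 1), ("s8", 0)],
}
results = leave_one_out_eval(datasets, train_majority, eval_accuracy)
print(results)
```

The same loop applies unchanged when `train_fn` fine-tunes a transformer and `eval_fn` computes F1; the stand-ins only keep the sketch self-contained.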
📝 Abstract
Identifying arguments is a necessary prerequisite for various tasks in automated discourse analysis, particularly within contexts such as political debates, online discussions, and scientific reasoning. In addition to theoretical advances in understanding what constitutes an argument, a significant body of research has emerged around practical argument mining, supported by a growing number of publicly available datasets. On these benchmarks, BERT-like transformers have consistently performed best, reinforcing the belief that such models are broadly applicable across diverse contexts of debate. This study offers the first large-scale re-evaluation of such state-of-the-art models, with a specific focus on their ability to generalize in identifying arguments. We evaluate four transformers, three standard and one enhanced with contrastive pre-training for better generalization, on 17 English sentence-level datasets, the granularity most relevant to the task. Our findings show that, to varying degrees, these models tend to rely on lexical shortcuts tied to content words, suggesting that apparent progress may often be driven by dataset-specific cues rather than true task alignment. While the models achieve strong results on familiar benchmarks, their performance drops markedly when applied to unseen datasets. Nonetheless, incorporating both task-specific pre-training and joint benchmark training proves effective in enhancing both robustness and generalization.
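The contrastive pre-training the abstract mentions can be sketched under the assumption that it uses a supervised contrastive (SupCon-style) objective, pulling sentences with the same label together in embedding space regardless of their source dataset. The embeddings, labels, and temperature below are toy values, not the paper's actual setup.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over one batch of (roughly unit-norm) embeddings.

    For each anchor, same-label examples are positives and everything else
    in the batch serves as negatives in the softmax denominator.
    """
    n, total = len(embeddings), 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(dot(embeddings[i], embeddings[k]) / tau)
                    for k in range(n) if k != i)
        total -= sum(math.log(math.exp(dot(embeddings[i], embeddings[p]) / tau)
                              / denom)
                     for p in positives) / len(positives)
    return total / n

# Toy sentence embeddings: the first two cluster together, as do the last two.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
aligned = supcon_loss(emb, [1, 1, 0, 0])   # labels match the clusters
shuffled = supcon_loss(emb, [1, 0, 1, 0])  # labels cut across the clusters
print(aligned, shuffled)
```

When labels agree with the geometry the loss is small, and it grows sharply when same-label pairs sit far apart, which is the pressure that would push a model away from dataset-specific lexical cues and toward label-consistent structure.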