Sarcasm Detection as a Catalyst: Improving Stance Detection with Cross-Target Capabilities

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in stance detection (SD): (1) performance degradation caused by sarcastic language, and (2) the severe scarcity of labeled data in cross-target stance detection (CTSD). To tackle these, we propose a transfer-learning framework that uses sarcasm detection as an intermediate task: a two-stage fine-tuning architecture that combines BERT or RoBERTa with additional deep neural network layers. The method enables zero-shot cross-target stance prediction without requiring any target-domain labeled data. Experiments demonstrate substantial improvements: on in-domain SD, average macro F1 increases significantly and 85% of previously misclassified sarcastic instances are correctly identified; on CTSD, the approach surpasses state-of-the-art baselines and performs comparably to fully supervised in-domain models. This is the first systematic study of sarcasm-aware transfer learning for SD, establishing a baseline for low-resource stance analysis.
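
To make the two-stage idea concrete, here is a minimal sketch of intermediate-task transfer with Hugging Face Transformers: an encoder is first fine-tuned on sarcasm detection, then its weights initialise a stance classifier. The dataset variables and training settings are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the two-stage transfer idea, NOT the authors' released
# code: stage 1 fine-tunes an encoder on sarcasm detection, stage 2 reuses the
# sarcasm-aware weights for stance detection. `sarcasm_ds` and `stance_ds` are
# hypothetical pre-tokenized datasets.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

# Stage 1: intermediate task -- binary sarcasm detection.
sarcasm_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)
Trainer(
    model=sarcasm_model,
    args=TrainingArguments(output_dir="sarcasm-ft", num_train_epochs=3),
    train_dataset=sarcasm_ds,          # hypothetical tokenized sarcasm data
).train()
sarcasm_model.save_pretrained("sarcasm-ft/encoder")

# Stage 2: target task -- stance detection (e.g. FAVOR / AGAINST / NONE),
# initialised from the sarcasm-aware encoder. For zero-shot CTSD the stance
# training set simply excludes any examples of the evaluation target.
stance_model = AutoModelForSequenceClassification.from_pretrained(
    "sarcasm-ft/encoder", num_labels=3, ignore_mismatched_sizes=True)
Trainer(
    model=stance_model,
    args=TrainingArguments(output_dir="stance-ft", num_train_epochs=3),
    train_dataset=stance_ds,           # hypothetical tokenized stance data
).train()
```
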

📝 Abstract
Stance Detection (SD) has become a critical area of interest due to its applications in various contexts, leading to increased research within NLP. Yet the subtlety and complexity of texts sourced from online platforms, which often contain sarcastic language, pose significant challenges for SD algorithms in accurately determining the author's stance. This paper addresses this by employing sarcasm detection for SD. It also tackles the issue of insufficient annotated data for training SD models on new targets by conducting Cross-Target SD (CTSD). The proposed approach involves fine-tuning BERT and RoBERTa models, followed by concatenating additional deep-learning layers. The approach is assessed against various state-of-the-art baselines for SD, demonstrating superior performance on publicly available datasets. Notably, our model outperforms the best SOTA models on both in-domain SD and CTSD tasks even before the incorporation of sarcasm-detection pre-training. The integration of sarcasm knowledge into the model significantly reduces misclassifications of sarcastic text elements in SD, allowing our model to accurately predict 85% of texts that were previously misclassified without sarcasm-detection pre-training on in-domain SD. This enhancement contributes to an increase in the model's average macro F1-score. The CTSD task achieves performance comparable to that of the in-domain task despite using zero-shot fine-tuning. We also reveal that the success of the transfer-learning framework relies on the correlation between the lexical attributes of sarcasm detection and SD. This study represents the first exploration of sarcasm detection as an intermediate transfer-learning task within the context of SD, while also leveraging the concatenation of BERT or RoBERTa with other deep-learning techniques. The proposed approach establishes a foundational baseline for future research in this domain.
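
To make the zero-shot cross-target setup concrete, below is a minimal sketch, under common assumptions about the data, of a leave-one-target-out evaluation loop scored with average macro F1. The example dicts and the `train_and_predict` helper are hypothetical placeholders, not the paper's code.

```python
# Hedged sketch of a zero-shot cross-target SD (CTSD) evaluation protocol:
# the model never sees labeled examples for the held-out target. The data
# layout and the `train_and_predict` helper are hypothetical placeholders.
from sklearn.metrics import f1_score

def cross_target_eval(examples, targets, train_and_predict):
    """examples: dicts with 'text', 'target', and 'stance' keys."""
    scores = {}
    for held_out in targets:
        train = [ex for ex in examples if ex["target"] != held_out]
        test = [ex for ex in examples if ex["target"] == held_out]
        preds = train_and_predict(train, [ex["text"] for ex in test])
        gold = [ex["stance"] for ex in test]
        # The paper reports average macro F1 over the stance classes.
        scores[held_out] = f1_score(gold, preds, average="macro")
    return scores
```
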
Problem

Research questions and friction points this paper is trying to address.

Sarcastic language in online text frequently causes SD models to misjudge the author's stance; sarcasm detection is integrated to counter this.
Annotated data for training SD models on new targets is scarce, motivating cross-target stance detection (CTSD).
How best to combine fine-tuned BERT or RoBERTa encoders with additional deep-learning layers to enhance accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes BERT and RoBERTa models.
Concatenates additional deep-learning layers on the fine-tuned encoder (see the sketch after this list).
Integrates sarcasm detection as an intermediate task for stance prediction.
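
One plausible reading of "concatenating additional deep-learning layers", sketched in PyTorch under stated assumptions: the encoder's token states feed a BiLSTM, and its pooled summary is concatenated with the [CLS] vector before classification. The BiLSTM head and layer sizes are illustrative choices, not the paper's exact architecture.

```python
# Hedged sketch of one way to concatenate a transformer encoder with extra
# deep-learning layers for stance classification. The BiLSTM head and layer
# sizes are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
from transformers import AutoModel

class StanceClassifier(nn.Module):
    def __init__(self, encoder_name="roberta-base", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size          # 768 for base models
        self.bilstm = nn.LSTM(hidden, 128, batch_first=True, bidirectional=True)
        # Classifier sees the [CLS] vector concatenated with the BiLSTM summary.
        self.classifier = nn.Sequential(
            nn.Dropout(0.1),
            nn.Linear(hidden + 2 * 128, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state                    # (B, T, hidden)
        cls = tokens[:, 0, :]                             # [CLS]/<s> representation
        lstm_out, _ = self.bilstm(tokens)                 # (B, T, 256)
        summary = lstm_out.mean(dim=1)                    # mean-pool BiLSTM states
        return self.classifier(torch.cat([cls, summary], dim=-1))
```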