🤖 AI Summary
This study addresses the challenges of Chinese sarcasm detection, which suffers from data scarcity and high annotation costs, as well as the neglect of individual users’ linguistic idiosyncrasies in existing approaches. To overcome these limitations, the work proposes a novel framework that integrates users’ long-term language behavior into sarcasm recognition by combining generative adversarial networks (GANs) with large language models such as GPT-3.5 for data augmentation. This approach yields SinaSarc, a multidimensional sarcasm corpus enriched with user history. Furthermore, the authors extend the BERT architecture to dynamically model personalized language styles. Experimental results demonstrate significant improvements over current state-of-the-art models, achieving F1 scores of 0.9151 and 0.9138 for sarcastic and non-sarcastic classes, respectively.
📝 Abstract
Sarcasm is a rhetorical device that expresses criticism or emphasizes characteristics of certain individuals or situations through exaggeration, irony, or comparison. Existing methods for Chinese sarcasm detection are constrained by limited datasets and high construction costs, and they mainly focus on textual features, overlooking user-specific linguistic patterns that shape how opinions and emotions are expressed. This paper proposes a Generative Adversarial Network (GAN) and Large Language Model (LLM)-driven data augmentation framework to dynamically model users' linguistic patterns for enhanced Chinese sarcasm detection. First, we collect raw data from various topics on Sina Weibo. Then, we train a GAN on these data and apply a GPT-3.5-based data augmentation technique to synthesize an extended sarcastic comment dataset, named SinaSarc. This dataset contains target comments, contextual information, and user historical behavior. Finally, we extend the BERT architecture to incorporate multi-dimensional information, particularly user historical behavior, enabling the model to capture dynamic linguistic patterns and uncover implicit sarcastic cues in comments. Experimental results demonstrate the effectiveness of our proposed method. Specifically, our model achieves the highest F1-scores on both the non-sarcastic and sarcastic categories, with values of 0.9138 and 0.9151, respectively, outperforming all existing state-of-the-art (SOTA) approaches. This study presents a novel framework for dynamically modeling users' long-term linguistic patterns in Chinese sarcasm detection, contributing to both dataset construction and methodological advancement in this field.
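The abstract describes the GPT-3.5-based augmentation only at a high level. As an illustration, a prompt template for expanding a seed sarcastic comment into paraphrased variants might look like the sketch below; the function name, template wording, and parameters are assumptions for illustration, not the authors' actual prompt.

```python
def build_augmentation_prompt(seed_comment: str, topic: str, n: int = 3) -> str:
    """Hypothetical prompt for LLM-based sarcasm data augmentation.

    The paper's framework feeds Sina Weibo comments to GPT-3.5; this
    template is an illustrative assumption, not the published prompt.
    """
    return (
        f"The following Weibo comment on the topic '{topic}' is sarcastic:\n"
        f"  {seed_comment}\n"
        f"Write {n} new comments that express the same sarcastic stance "
        f"in different words, one per line."
    )

# Example: generate a prompt asking for three paraphrases of a seed comment.
prompt = build_augmentation_prompt("What 'wonderful' service...", "customer service", n=3)
```

The returned string would then be sent to the LLM's chat endpoint, and the generated lines filtered (e.g., by a discriminator or human check) before being added to the corpus.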
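The abstract states that the extended BERT fuses the target comment, its context, and the user's historical behavior, but does not specify the fusion layers. A minimal sketch of one plausible scheme, mean-pooling history embeddings and concatenating them with comment and context vectors before a linear head, is shown below; the toy hash-seeded encoder merely stands in for BERT so the sketch runs without model weights, and the whole fusion design is an assumption, not the paper's architecture.

```python
import numpy as np

DIM = 8  # toy embedding size; a real BERT [CLS] vector would be 768-d

def encode(text: str, dim: int = DIM) -> np.ndarray:
    # Stand-in for a BERT encoder: a fixed pseudo-random vector seeded
    # by the text, so the sketch is self-contained and deterministic.
    seed = sum(ord(c) for c in text) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def sarcasm_score(comment: str, context: str, history: list[str],
                  w: np.ndarray, b: float) -> float:
    """Fuse comment, context, and mean-pooled user history, then apply
    a linear head. Fusion by concatenation is an illustrative choice."""
    fused = np.concatenate([
        encode(comment),
        encode(context),
        np.mean([encode(p) for p in history], axis=0),  # long-term style
    ])
    logit = float(fused @ w + b)
    return 1.0 / (1.0 + np.exp(-logit))  # probability of "sarcastic"

# Usage with random (untrained) head weights:
rng = np.random.default_rng(42)
w = rng.standard_normal(3 * DIM)
p = sarcasm_score("Oh great, another delay.", "Flight cancelled again.",
                  ["Love waiting three hours!", "Best airline ever..."],
                  w, 0.0)
```

In a trained system, `w` and `b` would be learned jointly with the encoder, and the mean pooling over `history` could be replaced by attention so that only style-relevant past posts influence the prediction.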