🤖 AI Summary
Parody detection on social media faces challenges including strong contextual dependency, scarce bilingual data, and insufficient structural modeling. This paper introduces a bilingual (Chinese-English), graph-structured benchmark for parody detection, comprising 14,755 annotated users and 21,210 annotated comments, and models sociolinguistic propagation through user-comment-reply interaction graphs. A unified cross-lingual, multi-task evaluation framework covers parody detection and sentiment analysis. Empirical results show that, in certain scenarios, lightweight pipelines (e.g., BERT embeddings with an SVM classifier) can outperform state-of-the-art large language models such as DeepSeek-R1 and GPT-o3, underscoring the critical role of structured contextual modeling in parody understanding. The dataset, source code, and evaluation protocols are publicly released to foster reproducibility and to support research in cultural computing and AI robustness.
📝 Abstract
Parody is an emerging phenomenon on social media, where individuals imitate a role or position opposite to their own, often for humor, provocation, or controversy. Detecting and analyzing parody is challenging and heavily context-dependent, yet it plays a crucial role in understanding cultural values, promoting subcultures, and enhancing self-expression. However, the study of parody is hindered by the limited size and diversity of existing datasets. To bridge this gap, we build seven parody datasets from English and Chinese corpora, with 14,755 annotated users and 21,210 annotated comments in total. We also collect replies and construct user-interaction graphs, providing the richer contextual information that existing datasets lack. With these datasets, we evaluate traditional methods and Large Language Models (LLMs) on three key tasks: (1) parody detection, (2) comment sentiment analysis with parody, and (3) user sentiment analysis with parody. Our extensive experiments reveal that parody-related tasks remain challenging for all models and that contextual information plays a critical role. Interestingly, we find that, in certain scenarios, traditional sentence embedding methods combined with simple classifiers can outperform advanced LLMs, namely DeepSeek-R1 and GPT-o3, highlighting parody as a significant challenge for LLMs.
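The "sentence embedding + simple classifier" baseline mentioned above can be sketched roughly as follows. This is a minimal illustration, not the paper's code: it substitutes TF-IDF features for BERT sentence embeddings so it runs without a pretrained model, and the toy comments and labels are invented for demonstration. Swapping the vectorizer for real sentence embeddings would change only the feature-extraction step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy labeled comments (hypothetical): 1 = parody, 0 = literal.
# In the paper's setting these would be annotated comments, optionally
# concatenated with replies to supply contextual information.
train_texts = [
    "Oh sure, best customer service ever, only waited two hours",
    "Wow, what a genius move, truly inspiring leadership",
    "Totally believable, a politician who keeps every promise",
    "The package arrived on time and in good condition",
    "I enjoyed the movie and would happily watch it again",
    "The tutorial explains the installation steps clearly",
]
train_labels = [1, 1, 1, 0, 0, 0]

# Sentence features -> SVM classifier, as one pipeline.
clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(train_texts, train_labels)

pred = clf.predict(["Oh great, another delay, exactly what I hoped for"])
print(pred)
```

The design point is that the classifier itself stays trivial; any gains over LLMs in the paper's experiments come from the features and the added conversational context, not from model capacity.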