Context-Aware Pragmatic Metacognitive Prompting for Sarcasm Detection

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sarcasm detection poses significant challenges for pre-trained language models (PLMs) and large language models (LLMs) due to linguistic ambiguity, cultural specificity, and strong contextual dependence. To address these issues, we propose the Retrieval-Aware Context Enhancement (RACE) framework—a novel approach that synergistically integrates Pragmatic Metacognitive Prompting (PMP), external web retrieval, and model self-knowledge extraction to dynamically model culturally sensitive expressions and implicit semantics. RACE employs non-parametric knowledge injection and introspective knowledge activation, enhancing the model’s ability to discern complex sarcasm. Evaluated on three benchmark datasets—Twitter Indonesia Sarcastic, MUStARD, and SemEval—RACE achieves macro-F1 improvements of 9.87%, 4.08%, and 3.29%, respectively. These results demonstrate its robust generalization across diverse online communities and multilingual settings, underscoring both its practical efficacy and methodological advancement.

📝 Abstract
Detecting sarcasm remains a challenging task in Natural Language Processing (NLP) despite recent advances in neural network approaches. Currently, Pre-trained Language Models (PLMs) and Large Language Models (LLMs) are the preferred approach for sarcasm detection. However, the complexity of sarcastic text, combined with linguistic diversity and cultural variation across communities, makes the task difficult even for PLMs and LLMs. Moreover, these models detect unreliably when words or tokens require extra grounding for analysis. Building on a state-of-the-art prompting method for sarcasm detection with LLMs called Pragmatic Metacognitive Prompting (PMP), we introduce a retrieval-aware approach that incorporates retrieved contextual information for each target text. Our pipeline explores two complementary ways to provide context: injecting non-parametric knowledge via web-based retrieval when the model lacks necessary background, and eliciting the model's own internal knowledge as a self-knowledge awareness strategy. We evaluated our approach on three datasets: Twitter Indonesia Sarcastic, SemEval-2018 Task 3, and MUStARD. Non-parametric retrieval yielded a significant 9.87% macro-F1 improvement on Twitter Indonesia Sarcastic over the original PMP method. Self-knowledge retrieval improves macro-F1 by 3.29% on SemEval and by 4.08% on MUStARD. These findings highlight the importance of context in enhancing LLM performance on the sarcasm detection task, particularly when texts involve culturally specific slang, references, or terms unknown to the LLMs. Future work will focus on optimizing the retrieval of relevant contextual information and examining how retrieval quality affects performance. The experiment code is available at: https://github.com/wllchrst/sarcasm-detection_pmp_knowledge-base.
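The two context-acquisition strategies the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `search` and `llm` are assumed to be caller-supplied callables (e.g. wrappers around a search-engine API and an LLM endpoint), and the query/prompt wording is hypothetical.

```python
def web_context(text: str, search) -> str:
    """Non-parametric knowledge injection: retrieve web snippets that
    explain unfamiliar terms in the target text.

    `search` is assumed to be any callable returning a list of snippet
    strings for a query (hypothetical interface).
    """
    snippets = search(f"meaning of terms in: {text}")
    # Keep only a few top snippets as compact background context.
    return "\n".join(snippets[:3])


def self_context(text: str, llm) -> str:
    """Introspective knowledge activation: ask the model itself to
    explain the slang and cultural references it already knows.

    `llm` is assumed to be any callable mapping a prompt string to a
    completion string (hypothetical interface).
    """
    prompt = (
        "List any slang, cultural references, or unusual terms in the "
        f"following text and briefly explain each:\n{text}"
    )
    return llm(prompt)
```

Either function's output would then be prepended as context to the PMP-style classification prompt for the target text.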
Problem

Research questions and friction points this paper is trying to address.

Detecting sarcasm remains challenging for NLP models
PLMs and LLMs struggle with sarcasm's linguistic diversity
Models lack contextual grounding for sarcasm analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieves contextual information for sarcasm detection
Uses web-based retrieval for external knowledge
Elicits internal model knowledge for self-awareness
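Once context is retrieved (from the web or from the model itself), it is combined with a pragmatic metacognitive prompt for the final sarcasm judgment. A minimal sketch of that composition step, with illustrative wording standing in for the actual PMP template used in the paper:

```python
def build_pmp_prompt(text: str, context: str) -> str:
    """Compose retrieved background context with a metacognitive-style
    classification prompt.

    The reflect-then-decide wording below is a hypothetical stand-in
    for the paper's PMP template, not the exact prompt.
    """
    return (
        f"Background context:\n{context}\n\n"
        f"Text: {text}\n\n"
        "First, reflect on the literal meaning of the text and the "
        "speaker's likely intent given the background context. "
        "Then decide: is the text sarcastic? "
        "Answer 'sarcastic' or 'not sarcastic'."
    )
```

The resulting string would be sent to the LLM; the design choice is simply that grounding context precedes the metacognitive instructions, so the model reasons over the unfamiliar terms before committing to a label.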
Michael Iskandardinata
Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, Indonesia
William Christian
Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, Indonesia
Derwin Suhartono
Computer Science Department, Bina Nusantara University
Artificial Intelligence · Computational Linguistics · Personality Recognition