D-HUMOR: Dark Humor Understanding via Multimodal Open-ended Reasoning

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Detecting dark humor in internet memes is challenging due to its reliance on implicit, sensitive, and highly culture-dependent multimodal cues. To address this, we introduce a large-scale dataset of 4,379 Reddit memes annotated for dark humor, supporting three tasks: dark humor detection, target category identification, and intensity grading. Methodologically, we propose a Tri-stream Cross-Reasoning Network (TCRNet) that jointly fuses OCR-extracted text, ViT-derived visual features, and structured reasoning sequences generated by a large vision-language model. We further introduce a Role-Reversal Self-Loop, in which the model adopts the author's perspective to iteratively refine its explanations and better capture cultural context and ironic logic. Experiments demonstrate significant improvements over strong baselines across all three tasks. Both the dataset and source code are publicly released to advance research in content safety and multimodal humor understanding.

📝 Abstract
Dark humor in online memes poses unique challenges due to its reliance on implicit, sensitive, and culturally contextual cues. To address the lack of resources and methods for detecting dark humor in multimodal content, we introduce a novel dataset of 4,379 Reddit memes annotated for dark humor, target category (gender, mental health, violence, race, disability, and other), and a three-level intensity rating (mild, moderate, severe). Building on this resource, we propose a reasoning-augmented framework that first generates structured explanations for each meme using a Large Vision-Language Model (VLM). Through a Role-Reversal Self-Loop, the VLM adopts the author's perspective to iteratively refine its explanations, ensuring completeness and alignment. We then extract textual features from both the OCR transcript and the self-refined reasoning via a text encoder, while visual features are obtained using a vision transformer. A Tri-stream Cross-Reasoning Network (TCRNet) fuses these three streams (text, image, and reasoning) via pairwise attention mechanisms, producing a unified representation for classification. Experimental results demonstrate that our approach outperforms strong baselines across three tasks: dark humor detection, target identification, and intensity prediction. The dataset, annotations, and code are released to facilitate further research in multimodal humor understanding and content moderation. Code and Dataset are available at: https://github.com/Sai-Kartheek-Reddy/D-Humor-Dark-Humor-Understanding-via-Multimodal-Open-ended-Reasoning
Problem

Research questions and friction points this paper is trying to address.

Detecting dark humor in multimodal online memes
Addressing implicit and culturally contextual cues
Classifying target categories and intensity levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Role-Reversal Self-Loop refines multimodal explanations iteratively
Tri-stream Cross-Reasoning Network fuses text, image, reasoning features
Large Vision-Language Model generates structured dark humor explanations
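The Role-Reversal Self-Loop above can be sketched as a critique-and-revise cycle. Here `vlm_generate` is a hypothetical placeholder for a vision-language-model call; the prompt wording, stopping rule, and round limit are illustrative assumptions rather than the paper's protocol.

```python
def role_reversal_self_loop(meme_text, initial_explanation, vlm_generate,
                            max_rounds=3):
    """Sketch of author-perspective iterative refinement.
    vlm_generate(prompt) -> str is a placeholder for a VLM call."""
    explanation = initial_explanation
    for _ in range(max_rounds):
        # Role reversal: the model critiques the explanation as the
        # meme's author, surfacing missed cultural or ironic cues.
        critique = vlm_generate(
            f"As the meme's author, point out what this explanation of "
            f"'{meme_text}' misses or gets wrong:\n{explanation}")
        if not critique.strip():
            break  # the author-perspective pass found nothing to fix
        # Revise the explanation to address the critique.
        explanation = vlm_generate(
            f"Revise the explanation to address the critique.\n"
            f"Explanation: {explanation}\nCritique: {critique}")
    return explanation
```

In the full pipeline, the final self-refined explanation is what the text encoder consumes as the reasoning stream.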
Sai Kartheek Reddy Kasu
IIIT Dharwad, India
Social Computing · Natural Language Processing · Responsible AI
Mohammad Zia Ur Rehman
Indian Institute of Technology Indore, India
Shahid Shafi Dar
Indian Institute of Technology Indore, India
Rishi Bharat Junghare
Indian Institute of Technology Indore, India
Dhanvin Sanjay Namboodiri
Malaviya National Institute of Technology Jaipur, India
Nagendra Kumar
Indian Institute of Technology Indore, India