🤖 AI Summary
Detecting dark humor in internet memes is challenging due to its reliance on implicit, sensitive, and highly culture-dependent multimodal cues. To address this, we introduce a large-scale dataset of 4,379 Reddit memes annotated for dark humor analysis, supporting three tasks: dark humor detection, target category identification, and intensity grading. Methodologically, we propose a Tri-stream Cross-Reasoning Network (TCRNet) that fuses OCR-extracted text, ViT-derived visual features, and structured reasoning sequences generated by a large vision-language model via pairwise attention. A Role-Reversal Self-Loop, in which the model adopts the meme author's perspective, iteratively refines these reasoning traces to better capture cultural context and ironic logic. Experiments demonstrate significant improvements over strong baselines across all three tasks. Both the dataset and source code are publicly released to advance research in content safety and multimodal humor understanding.
📝 Abstract
Dark humor in online memes poses unique challenges due to its reliance on implicit, sensitive, and culturally contextual cues. To address the lack of resources and methods for detecting dark humor in multimodal content, we introduce a novel dataset of 4,379 Reddit memes annotated for dark humor, target category (gender, mental health, violence, race, disability, and other), and a three-level intensity rating (mild, moderate, severe). Building on this resource, we propose a reasoning-augmented framework that first generates structured explanations for each meme using a Large Vision-Language Model (VLM). Through a Role-Reversal Self-Loop, the VLM adopts the author's perspective to iteratively refine its explanations, ensuring completeness and alignment. We then extract textual features from both the OCR transcript and the self-refined reasoning via a text encoder, while visual features are obtained using a vision transformer. A Tri-stream Cross-Reasoning Network (TCRNet) fuses these three streams (text, image, and reasoning) via pairwise attention mechanisms, producing a unified representation for classification. Experimental results demonstrate that our approach outperforms strong baselines across three tasks: dark humor detection, target identification, and intensity prediction. The dataset, annotations, and code are released to facilitate further research in multimodal humor understanding and content moderation. Code and dataset are available at: https://github.com/Sai-Kartheek-Reddy/D-Humor-Dark-Humor-Understanding-via-Multimodal-Open-ended-Reasoning
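The abstract does not spell out TCRNet's fusion equations, so the following is only a minimal, parameter-free sketch of the general idea: each stream's features attend over each other stream via scaled dot-product attention, and the cross-attended results are pooled and concatenated into one representation. The dimensions, pooling choice, and pair ordering here are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d):
    # Scaled dot-product attention: queries from one stream,
    # keys/values from another (no learned projections in this sketch).
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ kv_feats

def tri_stream_fusion(text, image, reasoning):
    # Pairwise cross-attention over all ordered stream pairs,
    # then mean-pool each result and concatenate.
    d = text.shape[-1]
    pairs = [(text, image), (text, reasoning),
             (image, text), (image, reasoning),
             (reasoning, text), (reasoning, image)]
    pooled = [cross_attention(q, kv, d).mean(axis=0) for q, kv in pairs]
    return np.concatenate(pooled)  # unified representation for a classifier head

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(5, d))       # stand-in for OCR-token embeddings
image = rng.normal(size=(4, d))      # stand-in for ViT patch embeddings
reasoning = rng.normal(size=(6, d))  # stand-in for VLM reasoning-step embeddings
fused = tri_stream_fusion(text, image, reasoning)
print(fused.shape)  # (48,) = 6 ordered pairs x d
```

In the actual model each stream would carry learned query/key/value projections and the fused vector would feed the three task-specific classification heads; the sketch only shows how three modality streams can interact pairwise before fusion.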