Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of balancing accuracy and interpretability in hateful meme detection, this paper proposes IntMeme, an interpretation-driven explainable framework. IntMeme uses large multimodal models (LMMs) to generate human-like, interpretive analyses of memes, then encodes the original meme and its textual interpretation with independent encoding modules whose representations are fused for classification. Unlike conventional pretrained vision-language models (PT-VLMs), whose decisions rest on implicit knowledge and opaque attention mechanisms, IntMeme grounds its predictions in explicit, human-readable interpretations. Evaluated on three benchmark datasets, IntMeme outperforms state-of-the-art models while providing traceable explanations alongside its predictions.

📝 Abstract
Hateful meme detection presents a significant challenge as a multimodal task due to the complexity of interpreting implicit hate messages and contextual cues within memes. Previous approaches have fine-tuned pre-trained vision-language models (PT-VLMs), leveraging the knowledge they gained during pre-training and their attention mechanisms to understand meme content. However, the reliance of these models on implicit knowledge and complex attention mechanisms renders their decisions difficult to explain, which is crucial for building trust in meme classification. In this paper, we introduce IntMeme, a novel framework that leverages Large Multimodal Models (LMMs) for hateful meme classification with explainable decisions. IntMeme addresses the dual challenges of improving both accuracy and explainability in meme moderation. The framework uses LMMs to generate human-like, interpretive analyses of memes, providing deeper insights into multimodal content and context. Additionally, it uses independent encoding modules for both memes and their interpretations, which are then combined to enhance classification performance. Our approach addresses the opacity and misclassification issues associated with PT-VLMs, optimizing the use of LMMs for hateful meme detection. We demonstrate the effectiveness of IntMeme through comprehensive experiments across three datasets, showcasing its superiority over state-of-the-art models.
Problem

Research questions and friction points this paper is trying to address.

Improving hateful meme detection accuracy
Enhancing explainability in meme classification
Addressing opacity in multimodal content analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Large Multimodal Models
Generates human-like interpretive analyses
Combines independent encoding modules for memes and their interpretations
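The two-stream design described above (a meme encoder and a separate encoder for the LMM-generated interpretation, fused for classification) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the encoders, embedding size, fusion-by-concatenation, and logistic head are all assumptions standing in for the real pretrained modules.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8  # illustrative embedding size, not taken from the paper

def encode_meme(image_feats, caption_feats):
    # Stand-in for IntMeme's meme-encoding module: the paper uses a
    # pretrained multimodal encoder; here we simply average the two
    # modality feature vectors (an assumption for illustration).
    return (image_feats + caption_feats) / 2.0

def encode_interpretation(tokens):
    # Stand-in for the text encoder over the LMM-generated interpretation:
    # a hashed bag-of-words embedding replaces a real language model.
    emb = np.zeros(EMB_DIM)
    for tok in tokens:
        emb[hash(tok) % EMB_DIM] += 1.0
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb

def classify(meme_emb, interp_emb, weights, bias):
    # Late fusion by concatenation, then a logistic head -> P(hateful).
    fused = np.concatenate([meme_emb, interp_emb])
    logit = fused @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))

# Toy inputs standing in for real extracted features.
image_feats = rng.normal(size=EMB_DIM)
caption_feats = rng.normal(size=EMB_DIM)
interpretation = "the image mocks a protected group".split()

meme_emb = encode_meme(image_feats, caption_feats)
interp_emb = encode_interpretation(interpretation)
weights = rng.normal(size=2 * EMB_DIM)
p_hateful = classify(meme_emb, interp_emb, weights, bias=0.0)
```

The key point the sketch captures is that the interpretation stream is an explicit, human-readable input to the classifier, so the prediction can be traced back to the generated analysis rather than to opaque attention weights.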