🤖 AI Summary
This study addresses a key limitation of existing hate speech datasets, which predominantly focus on explicit toxicity and struggle to capture implicit hate embedded within disinformation narratives. To bridge this gap, the authors introduce HateMirage, a novel dataset comprising 4,530 YouTube comments annotated along three fine-grained dimensions: target, intent, and implication (potential social impact). They further propose the first multidimensional explanatory framework that integrates disinformation reasoning with harm attribution, moving beyond traditional unidimensional or token-level approaches. The data are curated from YouTube discussions tied to claims debunked by fact-checking sources, and model-generated explanations are evaluated using ROUGE-L F1 and Sentence-BERT similarity. Benchmark experiments suggest that explanation quality depends more on the diversity and reasoning orientation of pretraining data than on model scale, establishing a new benchmark for interpretable hate speech detection.
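As a concrete reference, the sketch below shows how one might compute the two explanation-quality metrics named above, assuming the `rouge_score` and `sentence-transformers` packages; the SBERT checkpoint and the example strings are illustrative choices of ours, not taken from the paper.

```python
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

# Illustrative reference annotation and model-generated explanation
# (invented for this sketch, not drawn from HateMirage).
reference = "Targets migrants by framing a debunked claim as proof of criminality."
generated = "Blames migrants for crime by treating a false claim as established fact."

# Lexical overlap: ROUGE-L F1 (longest-common-subsequence based).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l_f1 = scorer.score(reference, generated)["rougeL"].fmeasure

# Semantic similarity: cosine similarity between Sentence-BERT embeddings.
sbert = SentenceTransformer("all-MiniLM-L6-v2")
emb_ref, emb_gen = sbert.encode([reference, generated], convert_to_tensor=True)
sbert_sim = util.cos_sim(emb_ref, emb_gen).item()

print(f"ROUGE-L F1: {rouge_l_f1:.3f}  SBERT similarity: {sbert_sim:.3f}")
```

ROUGE-L rewards lexical overlap with the reference explanation, while the embedding similarity tolerates paraphrase, which is presumably why both are reported together.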
📝 Abstract
Subtle and indirect hate speech remains an underexplored challenge in online safety research, particularly when harmful intent is embedded within misleading or manipulative narratives. Existing hate speech datasets primarily capture overt toxicity, underrepresenting the nuanced ways misinformation can incite or normalize hate. To address this gap, we present HateMirage, a novel dataset of Faux Hate comments designed to advance reasoning and explainability research on hate emerging from fake or distorted narratives. The dataset was constructed by identifying widely debunked misinformation claims from fact-checking sources and tracing related YouTube discussions, resulting in 4,530 user comments. Each comment is annotated along three interpretable dimensions: Target (who is affected), Intent (the underlying motivation or goal behind the comment), and Implication (its potential social impact). Unlike prior explainability datasets such as HateXplain and HARE, which offer token-level or single-dimensional reasoning, HateMirage introduces a multi-dimensional explanation framework that captures the interplay between misinformation, harm, and social consequence. We benchmark multiple open-source language models on HateMirage using ROUGE-L F1 and Sentence-BERT similarity to assess explanation coherence. Results suggest that explanation quality may depend more on pretraining diversity and reasoning-oriented data than on model scale alone. By coupling misinformation reasoning with harm attribution, HateMirage establishes a new benchmark for interpretable hate detection and responsible AI research.
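To make the three annotation dimensions concrete, here is a minimal, hypothetical sketch of a single annotated record; the field names, types, and example values are our own illustration and do not reflect the released HateMirage schema.

```python
from dataclasses import dataclass

@dataclass
class HateMirageRecord:
    """Hypothetical record layout; field names are illustrative only."""
    comment: str      # raw YouTube comment collected under a debunked claim
    target: str       # Target: who is affected by the comment
    intent: str       # Intent: the underlying motivation or goal
    implication: str  # Implication: the potential social impact

# Invented example for illustration, not an actual HateMirage entry.
record = HateMirageRecord(
    comment="Figures that group was behind it, just like the 'banned' video said.",
    target="the community scapegoated by the debunked claim",
    intent="reinforces a false narrative to assign collective blame",
    implication="normalizes hostility toward the group and lends credibility to the hoax",
)
```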