🤖 AI Summary
Detecting implicit hate videos in short-video platforms remains challenging due to their subtle semantic cues, weak cross-modal alignments, and critical temporal dynamics. To address this, we propose CMFusion, a novel multimodal fusion model featuring dual-path (channel-level and modality-level) integration. CMFusion jointly models text, audio, and visual modalities via video-audio temporal cross-attention, channel-wise recalibration, and modality-gated fusion. Unlike prevailing unimodal or shallow multimodal approaches, it explicitly captures intra-modal temporal evolution and fine-grained inter-modal interactions. On a real-world dataset, CMFusion consistently outperforms five strong baselines across accuracy, precision, recall, and F1-score. Ablation studies quantitatively validate the contribution of each component. This work establishes an interpretable and robust multimodal paradigm for implicit hate content detection, advancing both methodology and practical deployment.
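The summary names two fusion paths: channel-wise recalibration of each modality's features and a modality-level gate that weighs the modalities before combining them. The paper's exact formulation is in the linked repository; the snippet below is only a minimal NumPy sketch under common assumptions (squeeze-and-excitation-style sigmoid channel gating, a softmax gate over mean-pooled modality embeddings; the weight matrices `w1`, `w2`, `gate_w` are illustrative, not from the paper).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_recalibrate(feat, w1, w2):
    """SE-style channel gating: squeeze over time, excite, rescale channels.
    feat: (T, C) sequence of C-dim features."""
    s = feat.mean(axis=0)                            # squeeze: (C,)
    g = 1.0 / (1.0 + np.exp(-(np.tanh(s @ w1) @ w2)))  # excitation gate in (0, 1)
    return feat * g                                  # reweight each channel

def modality_gated_fusion(feats, gate_w):
    """Softmax gate over modalities, then a weighted sum of pooled embeddings.
    feats: list of (T_i, C) arrays, one per modality."""
    pooled = np.stack([f.mean(axis=0) for f in feats])  # (M, C)
    weights = softmax(pooled @ gate_w, axis=0)          # one weight per modality
    return (weights[:, None] * pooled).sum(axis=0)      # fused (C,) representation

# Toy usage: three modalities with different sequence lengths, shared width C.
rng = np.random.default_rng(0)
C = 16
text, audio, video = (rng.standard_normal((t, C)) for t in (5, 12, 8))
w1 = rng.standard_normal((C, C // 4)) * 0.1
w2 = rng.standard_normal((C // 4, C)) * 0.1
gate_w = rng.standard_normal(C) * 0.1
recalibrated = [channel_recalibrate(f, w1, w2) for f in (text, audio, video)]
fused = modality_gated_fusion(recalibrated, gate_w)
```

In this sketch the channel gate lets each modality emphasize informative feature dimensions before the modality gate decides how much each stream contributes to the final video representation.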
📝 Abstract
The rapid rise of video content on platforms such as TikTok and YouTube has transformed information dissemination, but it has also facilitated the spread of harmful content, particularly hate videos. Despite significant efforts to combat hate speech, detecting these videos remains challenging due to their often implicit nature. Current detection methods primarily rely on unimodal approaches, which inadequately capture the complementary features across different modalities. While multimodal techniques offer a broader perspective, many fail to effectively integrate temporal dynamics and modality-wise interactions essential for identifying nuanced hate content. In this paper, we present CMFusion, an enhanced multimodal hate video detection model utilizing a novel Channel-wise and Modality-wise Fusion Mechanism. CMFusion first extracts features from text, audio, and video modalities using pre-trained models and then incorporates a temporal cross-attention mechanism to capture dependencies between video and audio streams. The learned features are then processed by channel-wise and modality-wise fusion modules to obtain informative representations of videos. Our extensive experiments on a real-world dataset demonstrate that CMFusion significantly outperforms five widely used baselines in terms of accuracy, precision, recall, and F1 score. Comprehensive ablation studies and parameter analyses further validate our design choices, highlighting the model's effectiveness in detecting hate videos. The source code will be made publicly available at https://github.com/EvelynZ10/cmfusion.
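The abstract's temporal cross-attention step aligns the video and audio streams over time. A standard single-head, scaled dot-product form is a reasonable reading; the sketch below assumes that form (video frames as queries attending over audio frames, with the audio features reused as values), which is an illustration rather than the paper's exact module.

```python
import numpy as np

def cross_attention(query_seq, key_seq):
    """Scaled dot-product cross-attention: query_seq attends over key_seq.
    query_seq: (Tq, d) e.g. video frame features.
    key_seq:   (Tk, d) e.g. audio segment features (also used as values).
    Returns the attended sequence (Tq, d) and the attention map (Tq, Tk)."""
    d = query_seq.shape[-1]
    scores = query_seq @ key_seq.T / np.sqrt(d)          # (Tq, Tk) alignment scores
    scores -= scores.max(axis=-1, keepdims=True)         # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over key positions
    return weights @ key_seq, weights                    # audio-informed video features

# Toy usage: 8 video frames attend over 12 audio segments, shared width 16.
rng = np.random.default_rng(1)
video_feats = rng.standard_normal((8, 16))
audio_feats = rng.standard_normal((12, 16))
attended, attn = cross_attention(video_feats, audio_feats)
```

Each row of `attn` is a distribution over audio positions, so every output frame is a convex combination of audio features aligned to that moment in the video.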