LADLE-MM: Limited Annotation based Detector with Learned Ensembles for Multimodal Misinformation

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenges of high computational overhead and scarce labeled data in multimodal misinformation detection, this paper proposes a lightweight and efficient framework. Methodologically, it employs a dual unimodal encoder architecture, freezing BLIP-generated multimodal embeddings as a fixed reference space to circumvent end-to-end joint training. It introduces a novel "model-soup" parameter-initialization strategy and incorporates contrastive-learning-driven cross-modal representation alignment. Crucially, the method achieves strong generalization and robustness against unimodal bias without requiring grounding annotations. Experiments demonstrate that the approach reduces the trainable parameter count by 60.3% while achieving state-of-the-art performance on the DGM4 benchmark. Moreover, it outperforms existing methods, many of which rely on large vision-language models, on the VERITE open-set benchmark, confirming its effectiveness and scalability under realistic, data-constrained settings.
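The "model-soup" initialization mentioned in the summary refers, in the broader literature, to averaging the weights of several fine-tuned checkpoints into a single model. A minimal sketch of uniform (and optionally weighted) parameter averaging is shown below; the parameter dictionaries `sd_a` and `sd_b` are hypothetical stand-ins for real fine-tuned checkpoints, and this is an illustration of the general technique, not the paper's exact recipe:

```python
import numpy as np

def model_soup(state_dicts, weights=None):
    """Average the parameters of several fine-tuned models ("model soup").

    state_dicts: list of dicts mapping parameter name -> numpy array,
                 all with identical keys and shapes.
    weights:     optional per-checkpoint mixing weights (defaults to uniform).
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    keys = state_dicts[0].keys()
    # Entry-wise weighted average of each parameter tensor.
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts)) for k in keys}

# Two hypothetical fine-tuned checkpoints with a single weight matrix each.
sd_a = {"fc.weight": np.array([[1.0, 2.0], [3.0, 4.0]])}
sd_b = {"fc.weight": np.array([[3.0, 2.0], [1.0, 0.0]])}

soup = model_soup([sd_a, sd_b])
print(soup["fc.weight"])  # entry-wise average: [[2., 2.], [2., 2.]]
```

The averaged dictionary can then be loaded as the initialization of a new model before fine-tuning on the downstream task.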

📝 Abstract
With the rise of easily accessible tools for generating and manipulating multimedia content, realistic synthetic alterations to digital media have become a widespread threat, often involving manipulations across multiple modalities simultaneously. Recently, such techniques have been increasingly employed to distort narratives of important events and to spread misinformation on social media, prompting the development of misinformation detectors. In the context of misinformation conveyed through image-text pairs, several detection methods have been proposed. However, these approaches typically rely on computationally intensive architectures or require large amounts of annotated data. In this work, we introduce LADLE-MM: Limited Annotation based Detector with Learned Ensembles for Multimodal Misinformation, a model-soup-initialized multimodal misinformation detector designed to operate under a limited annotation setup and constrained training resources. LADLE-MM is composed of two unimodal branches and a third multimodal one that enhances image and text representations with additional multimodal embeddings extracted from BLIP, serving as a fixed reference space. Despite using 60.3% fewer trainable parameters than previous state-of-the-art models, LADLE-MM achieves competitive performance on both binary and multi-label classification tasks on the DGM4 benchmark, outperforming existing methods when trained without grounding annotations. Moreover, when evaluated on the VERITE dataset, LADLE-MM outperforms current state-of-the-art approaches that utilize more complex architectures involving Large Vision-Language Models, demonstrating effective generalization in an open-set setting and strong robustness to unimodal bias.
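As a rough illustration of the three-branch design described in the abstract (a sketch, not the authors' implementation), frozen multimodal embeddings can enhance trainable unimodal features by simple concatenation before a classification head. Here `blip_emb` is a placeholder for a precomputed, frozen BLIP embedding, and all feature values are random for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample features for a batch of 4 image-text pairs:
# two trainable unimodal branches plus a frozen multimodal reference.
img_feat = rng.normal(size=(4, 8))    # image-branch features (trainable)
txt_feat = rng.normal(size=(4, 8))    # text-branch features (trainable)
blip_emb = rng.normal(size=(4, 16))   # frozen multimodal reference (e.g. BLIP, no gradient)

# Enhance the unimodal representations with the fixed reference space
# by concatenation, then apply a toy linear binary classifier head.
fused = np.concatenate([img_feat, txt_feat, blip_emb], axis=1)  # shape (4, 32)
W = rng.normal(size=(32, 2))
logits = fused @ W
pred = logits.argmax(axis=1)  # toy labels: 0 = pristine, 1 = manipulated
```

Because the reference embeddings are precomputed and frozen, only the unimodal branches and the head contribute trainable parameters, which is consistent with the abstract's emphasis on a reduced trainable-parameter count.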
Problem

Research questions and friction points this paper is trying to address.

Detects multimodal misinformation with limited annotated data
Reduces computational cost and trainable parameters significantly
Enhances generalization and robustness against unimodal bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Limited annotation setup with model-soup initialization
Multimodal embeddings from BLIP as fixed reference space
Fewer trainable parameters than previous state-of-the-art models
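The AI summary also mentions contrastive-learning-driven cross-modal alignment. A common formulation of such an objective (not necessarily the exact loss used in the paper) is the symmetric InfoNCE loss, which pulls matching image/text embeddings together and pushes non-matching pairs apart:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image/text embedding batches.

    Rows of img_emb and txt_emb are assumed to be matching pairs;
    every other row in the batch serves as a negative.
    """
    # L2-normalize so similarities are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (N, N): matching pairs on the diagonal
    labels = np.arange(len(img))

    def xent(l):
        # Numerically stable cross-entropy with the diagonal as targets.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(l)), labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With well-aligned pairs the loss is low; shuffling one modality against the other raises it, which is the signal a contrastive alignment stage exploits during training.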
Daniele Cardullo
Sapienza University of Rome
Simone Teglia
Sapienza University of Rome
Irene Amerini
Sapienza Università di Roma, Italy
Multimedia forensics and security