🤖 AI Summary
This work exposes a critical and underexplored security vulnerability in multimodal retrieval-augmented generation (RAG) systems: the external knowledge bases they depend on can be poisoned. The authors propose MM-PoisonRAG, a knowledge poisoning attack framework with two strategies that exploit the tight coupling between retrieval and generation: Localized Poisoning Attack (LPA), which injects query-specific text and image misinformation to steer the model toward attacker-chosen answers, and Globalized Poisoning Attack (GPA), which supplies false guidance during generation to elicit nonsensical responses across all queries. Evaluated across multiple tasks, models, and access settings, LPA achieves an attack success rate of up to 56% on MultiModalQA, while GPA drives model accuracy to 0% with a single injection of irrelevant knowledge. The results expose security risks across the multimodal RAG knowledge supply chain and underscore the need for robust defenses.
📝 Abstract
Multimodal large language models (MLLMs) equipped with Retrieval-Augmented Generation (RAG) leverage both their rich parametric knowledge and dynamic external knowledge to excel at tasks such as question answering. While RAG enhances MLLMs by grounding responses in query-relevant external knowledge, this reliance poses a critical yet underexplored safety risk: knowledge poisoning attacks, in which misinformation or irrelevant knowledge is intentionally injected into external knowledge bases to manipulate model outputs into being incorrect or even harmful. To expose such vulnerabilities in multimodal RAG, we propose MM-PoisonRAG, a novel knowledge poisoning attack framework with two attack strategies: Localized Poisoning Attack (LPA), which injects query-specific misinformation in both text and images for targeted manipulation, and Globalized Poisoning Attack (GPA), which provides false guidance during MLLM generation to elicit nonsensical responses across all queries. We evaluate our attacks across multiple tasks, models, and access settings, demonstrating that LPA successfully manipulates the MLLM into generating attacker-controlled answers, with a success rate of up to 56% on MultiModalQA. Moreover, GPA completely disrupts model generation, driving accuracy to 0% with just a single irrelevant knowledge injection. Our results highlight the urgent need for robust defenses against knowledge poisoning to safeguard multimodal RAG frameworks.
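To make the retrieval-side mechanism of a localized poisoning attack concrete, here is a minimal toy sketch. All names, documents, and the bag-of-words retriever below are illustrative assumptions; the paper's actual attack operates on multimodal (text and image) embeddings and real retrievers. The key idea it illustrates is that a passage crafted to be embedding-close to a target query, but carrying an attacker-chosen answer, can outrank the correct knowledge during retrieval.

```python
# Toy sketch of a Localized-Poisoning-style attack on a text retrieval index.
# The bag-of-words "embedding" is a stand-in for a real encoder (assumption).
import math
from collections import Counter

def embed(text):
    """Bag-of-words pseudo-embedding: token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, knowledge_base, k=1):
    """Return the top-k passages by similarity to the query."""
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Clean knowledge base: retrieval returns the correct passage.
kb = [
    "The Eiffel Tower is located in Paris, France.",
    "The Great Wall of China is visible from low orbit.",
]
query = "Where is the Eiffel Tower located?"
print(retrieve(query, kb))

# Attacker injects a passage that echoes the target query (making it
# lexically close to the query embedding) but carries a false answer.
poison = "Where is the Eiffel Tower located? The Eiffel Tower is located in Berlin."
kb.append(poison)
print(retrieve(query, kb))  # the poisoned passage now ranks first
```

Because RAG grounds generation in whatever is retrieved, the downstream MLLM would then condition on the attacker-controlled passage, which is the manipulation LPA exploits; GPA instead injects guidance that degrades generation for every query rather than one target.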