AI Summary
This work addresses the vulnerability of multimodal large language models (MLLMs) to universal jailbreak attacks under combined image-text inputs. We propose a unified, cross-model multimodal jailbreak framework that is the first to systematically expose image-text modality interaction as a critical security weakness. The method combines gradient-driven iterative optimization, cross-model transfer, and a jointly optimized adversarial text suffix with a controllable image perturbation to generate highly transferable jailbreak inputs. Evaluated on mainstream MLLMs, including LLaVA, Yi-VL, and MiniGPT-4, the framework achieves high attack success rates and output quality, revealing significant defensive blind spots in current alignment mechanisms against multimodal threats. Our core contributions are: (1) establishing the multimodal collaborative vulnerability paradigm; and (2) providing the first reproducible, architecture-agnostic benchmark for universal multimodal jailbreak attacks.
Abstract
Large Language Models (LLMs) have evolved into Multimodal Large Language Models (MLLMs), which significantly extend their capabilities by integrating visual and other modalities and thus align more closely with human intelligence, which processes many forms of data beyond text. Despite these advances, undesirable generation from such models remains a critical concern, particularly given the vulnerabilities exposed by text-based jailbreak attacks, which already pose a significant threat to existing safety protocols. Motivated by the distinct security risks that arise when new modalities are combined with text in MLLMs, we propose a unified multimodal universal jailbreak attack framework that leverages iterative image-text interaction and a transfer-based strategy to generate a universal adversarial suffix and image. Our work not only shows that image-text modality interaction can be exploited as a critical vulnerability but also validates that multimodal universal jailbreak attacks yield higher-quality undesirable generations across different MLLMs. We evaluate the undesirable content generation of MLLMs such as LLaVA, Yi-VL, MiniGPT-4, MiniGPT-v2, and InstructBLIP and reveal significant multimodal safety alignment issues, highlighting the inadequacy of current safety mechanisms against sophisticated multimodal attacks. This study underscores the urgent need for robust safety measures in MLLMs and advocates a comprehensive review and strengthening of security protocols to mitigate the risks introduced by multimodal capabilities.
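To make the described attack concrete, the sketch below shows one plausible way to jointly optimize a universal adversarial image perturbation and text suffix over an ensemble of surrogate models, alternating a projected signed-gradient step on the image with a gradient-guided token swap on the suffix. This is a minimal illustration under stated assumptions, not the authors' implementation: the `SurrogateMLLM` stub, the `universal_attack` function, and the scalar target loss are hypothetical stand-ins for differentiable wrappers around real models such as LLaVA or MiniGPT-4.

```python
# Minimal sketch (not the paper's code): joint optimization of a universal
# adversarial image perturbation and text suffix against an ensemble of
# surrogate MLLMs. SurrogateMLLM and the scalar loss are illustrative
# assumptions; real use would wrap LLaVA / MiniGPT-4 style models exposing
# a differentiable loss over (image, text) toward a target response.
import torch
import torch.nn as nn


class SurrogateMLLM(nn.Module):
    """Toy stand-in for a vision-language model: returns a scalar loss that
    is lower when the (image, suffix) pair pushes the model toward a target
    (e.g., affirmative) response. Hypothetical interface."""

    def __init__(self, vocab_size=1000, img_dim=3 * 32 * 32, emb_dim=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, image, suffix_onehot):
        img_feat = self.img_proj(image.flatten(1))            # (B, emb)
        txt_feat = suffix_onehot @ self.tok_emb.weight         # (B, L, emb)
        fused = img_feat.unsqueeze(1) + txt_feat               # image-text interaction
        return self.head(fused).mean()                         # scalar surrogate loss


def universal_attack(models, image, suffix_len=20, vocab_size=1000,
                     steps=100, eps=8 / 255, alpha=1 / 255):
    """Alternate a PGD-style image update and a gradient-guided token update,
    averaging the loss over all surrogate models (transfer-based ensemble)."""
    delta = torch.zeros_like(image, requires_grad=True)         # image perturbation
    suffix_ids = torch.randint(0, vocab_size, (1, suffix_len))  # adversarial suffix

    for _ in range(steps):
        onehot = torch.nn.functional.one_hot(suffix_ids, vocab_size).float()
        onehot.requires_grad_(True)

        # Ensemble loss: average over surrogates to encourage transferability.
        loss = sum(m(image + delta, onehot) for m in models) / len(models)
        loss.backward()

        # (1) Controllable image perturbation: signed-gradient descent step,
        #     then projection back into an L-infinity ball of radius eps.
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

        # (2) Suffix update: at each position, pick the token whose one-hot
        #     gradient most decreases the loss (greedy coordinate-style step).
        with torch.no_grad():
            suffix_ids = onehot.grad.argmin(dim=-1)

    return delta.detach(), suffix_ids


if __name__ == "__main__":
    surrogates = [SurrogateMLLM() for _ in range(3)]  # toy ensemble
    img = torch.rand(1, 3, 32, 32)
    pert, suffix = universal_attack(surrogates, img)
    print(pert.shape, suffix.shape)
```

The alternation between the bounded image step and the suffix token step reflects the iterative image-text interaction described above, and averaging the loss over several surrogates is what would make the resulting suffix and perturbation transfer to unseen MLLMs; the exact losses, schedules, and model wrappers used in the paper may differ.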