🤖 AI Summary
Safety filters in text-to-image (T2I) models are vulnerable to jailbreaking attacks, yet existing LLM-based approaches suffer from high query overhead and lack interpretable, guidance-driven optimization. Method: We propose Metaphor-Driven Jailbreaking Attack (MJA), the first metaphor-guided multi-agent prompting framework for T2I safety evaluation. MJA integrates metaphor retrieval, context-aware prompt matching, and adversarial prompt generation, augmented by a surrogate-model-guided adaptive optimization mechanism. Contribution/Results: Evaluated across multiple open-source and commercial T2I models, MJA achieves an average 23.6% improvement in attack success rate while reducing query count by up to 62%. Crucially, generated adversarial prompts exhibit strong cross-model transferability. MJA establishes a new paradigm for efficient, low-query-cost T2I safety assessment—offering both enhanced efficacy and interpretability over prior methods.
📝 Abstract
To mitigate misuse, text-to-image (T2I) models commonly incorporate safety filters to prevent the generation of sensitive images. However, recent jailbreaking attacks use LLMs to generate adversarial prompts that bypass these filters while still producing sensitive images, exposing safety vulnerabilities in T2I models. Existing LLM-based attack methods lack explicit guidance and rely on a large number of queries to achieve a successful attack, which limits their practicality in real-world scenarios. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game, which aims to balance attack effectiveness and query efficiency by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. It then coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To improve attack efficiency, APO first trains a surrogate model to predict the attack outcomes of adversarial prompts and then applies an acquisition strategy to adaptively identify optimal ones. Experiments demonstrate that MJA achieves better attack effectiveness while requiring fewer queries than baseline methods. Moreover, the resulting adversarial prompts transfer well across a range of open-source and commercial T2I models. Warning: this paper includes model-generated content that may be offensive or distressing.
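The APO module described above (a surrogate model that predicts attack outcomes, plus an acquisition strategy that decides which candidate prompt to query next) can be sketched as a simple query-efficient selection loop. This is a minimal illustrative sketch, not the paper's implementation: the toy word-count surrogate, the greedy acquisition rule, and all names (`ToySurrogate`, `apo_loop`, `query_t2i`) are assumptions for illustration.

```python
# Hedged sketch of a surrogate-guided adversarial prompt optimization loop.
# The real APO module would use learned embeddings and a trained surrogate;
# here we use toy bag-of-words evidence purely to show the control flow.
from collections import defaultdict


def featurize(prompt):
    # Toy features: the set of lowercased words in the prompt.
    return set(prompt.lower().split())


class ToySurrogate:
    """Estimates attack success probability from word-level evidence seen so far."""

    def __init__(self):
        self.success = defaultdict(int)
        self.failure = defaultdict(int)

    def predict(self, prompt):
        words = featurize(prompt)
        s = sum(self.success[w] for w in words)
        f = sum(self.failure[w] for w in words)
        return (s + 1) / (s + f + 2)  # Laplace-smoothed success estimate

    def update(self, prompt, succeeded):
        table = self.success if succeeded else self.failure
        for w in featurize(prompt):
            table[w] += 1


def apo_loop(candidates, query_t2i, budget):
    """Adaptively query the T2I model, guided by the surrogate's predictions.

    `query_t2i(prompt)` is a stand-in for one attack attempt against the
    target model; it returns True if the attack succeeded.
    """
    surrogate = ToySurrogate()
    remaining = list(candidates)
    for _ in range(budget):
        if not remaining:
            break
        # Acquisition: greedily pick the candidate with highest predicted success.
        best = max(remaining, key=surrogate.predict)
        remaining.remove(best)
        succeeded = query_t2i(best)
        surrogate.update(best, succeeded)
        if succeeded:
            return best
    return None
```

The point of the surrogate is that each real query is expensive, so the loop spends its budget on the candidates the surrogate currently rates highest, updating its beliefs after every attempt rather than querying candidates blindly.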