Metaphor-based Jailbreaking Attacks on Text-to-Image Models

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Safety filters in text-to-image (T2I) models are vulnerable to jailbreaking attacks, yet existing LLM-based approaches suffer from high query overhead and lack interpretable, guidance-driven optimization. Method: We propose Metaphor-Driven Jailbreaking Attack (MJA), the first metaphor-guided multi-agent prompting framework for T2I safety evaluation. MJA integrates metaphor retrieval, context-aware prompt matching, and adversarial prompt generation, augmented by a surrogate-model-guided adaptive optimization mechanism. Contribution/Results: Evaluated across multiple open-source and commercial T2I models, MJA achieves an average 23.6% improvement in attack success rate while reducing query count by up to 62%. Crucially, generated adversarial prompts exhibit strong cross-model transferability. MJA establishes a new paradigm for efficient, low-query-cost T2I safety assessment—offering both enhanced efficacy and interpretability over prior methods.

📝 Abstract
To mitigate misuse, text-to-image (T2I) models commonly incorporate safety filters to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attack methods use LLMs to generate adversarial prompts that effectively bypass safety filters while generating sensitive images, revealing the safety vulnerabilities within the T2I model. However, existing LLM-based attack methods lack explicit guidance, relying on substantial queries to achieve a successful attack, which limits their practicality in real-world scenarios. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game, aiming to balance attack effectiveness and query efficiency by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. Subsequently, MLAG coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To enhance attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then designs an acquisition strategy to adaptively identify optimal adversarial prompts. Experiments demonstrate that MJA achieves better attack effectiveness while requiring fewer queries than baseline methods. Moreover, our adversarial prompts exhibit strong transferability across various open-source and commercial T2I models. Warning: this paper includes model-generated content that may contain offensive or distressing material.
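The MLAG decomposition described in the abstract can be sketched as three cooperating agents chained in sequence. The lookup tables, function names, and prompt template below are invented placeholders for illustration only; in the paper the agents are LLM-backed, not dictionaries.

```python
# Hypothetical sketch of the MLAG pipeline: metaphor retrieval ->
# context matching -> adversarial prompt generation. All data here
# is made up; real agents would query an LLM at each step.

METAPHORS = {"fire": ["a blooming crimson flower", "a dancing orange ribbon"]}
CONTEXTS = {"a blooming crimson flower": "in a silent winter garden",
            "a dancing orange ribbon": "above a sleeping village"}

def metaphor_agent(concept: str) -> list[str]:
    """Agent 1: retrieve candidate metaphors for a target concept."""
    return METAPHORS.get(concept, [])

def context_agent(metaphor: str) -> str:
    """Agent 2: match the metaphor with a plausible visual context."""
    return CONTEXTS.get(metaphor, "in an unspecified scene")

def prompt_agent(metaphor: str, context: str) -> str:
    """Agent 3: compose the final candidate prompt."""
    return f"A painting of {metaphor} {context}"

def mlag(concept: str) -> list[str]:
    """Coordinate the three agents to produce diverse candidate prompts."""
    return [prompt_agent(m, context_agent(m)) for m in metaphor_agent(concept)]

prompts = mlag("fire")
print(prompts)
```

Diversity in the real system comes from exploring many metaphor-context pairs per concept, which this toy chain mimics by emitting one prompt per retrieved metaphor.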
Problem

Research questions and friction points this paper is trying to address.

Bypassing safety filters in text-to-image models
Improving efficiency of jailbreaking attack methods
Enhancing transferability of adversarial prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metaphor-based adversarial prompt generation
LLM multi-agent coordination for diverse prompts
Surrogate model predicts optimal attack prompts
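The surrogate-guided selection idea (the APO module) can be illustrated as a score-then-filter loop: a cheap surrogate ranks candidate prompts, and only the top few are spent on real queries to the target model. The scoring heuristic and all names below are assumptions for illustration; the paper's surrogate is a trained model, not a keyword count.

```python
# Illustrative sketch of APO-style acquisition: rank candidates with a
# surrogate so that only the most promising prompts consume real queries.
# The surrogate below is a toy heuristic, not the paper's trained model.

def surrogate_score(prompt: str) -> float:
    """Stand-in surrogate: predict a prompt's attack success.
    Toy heuristic: metaphor-laden, longer prompts score higher."""
    metaphor_markers = ("like", "as if", "veiled", "shadow")
    hits = sum(m in prompt.lower() for m in metaphor_markers)
    return hits + min(len(prompt), 80) / 80.0

def select_candidates(candidates: list[str], budget: int) -> list[str]:
    """Acquisition step: keep the top-`budget` prompts by surrogate score,
    capping the number of queries sent to the target T2I model."""
    return sorted(candidates, key=surrogate_score, reverse=True)[:budget]

candidates = [
    "a plain description",
    "a scene veiled like a shadow at dusk",
    "an image as if painted by moonlight",
]
chosen = select_candidates(candidates, budget=2)
print(chosen)
```

The query savings come from the gap between cheap surrogate evaluations (all candidates) and expensive target-model queries (only the selected budget).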
Chenyu Zhang
School of New Media and Communication, Tianjin University, Tianjin, China
Yiwen Ma
School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Lanjun Wang
School of New Media and Communication, Tianjin University, Tianjin, China
Wenhui Li
National Institute of Biological Sciences, Beijing
Yi Tu
Ant Group
An-An Liu
School of Electrical and Information Engineering, Tianjin University, Tianjin, China