Metaphor-based Jailbreaking Attacks on Text-to-Image Models

📅 2025-12-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-image (T2I) models commonly employ sensitive-content defense mechanisms, yet existing jailbreaking attacks require prior knowledge of the specific defense type, severely limiting their generalizability. Method: This paper proposes a prior-free, metaphor-based jailbreaking attack—the first to draw inspiration from the “Taboo” word game—featuring a dual-module framework: a metaphor retrieval and context-matching module that generates semantically oblique prompts, and an agent model-driven adversarial prompt optimization (APO) module that enhances attack efficiency. The approach integrates collaborative multi-LLM agents to enable robust cross-defense generalization. Contribution/Results: Our method achieves state-of-the-art performance against diverse mainstream defense models, significantly outperforming six baseline methods. It attains higher attack success rates while reducing query counts by 37%–62%, demonstrating strong generalization to both unknown internal and external defenses.

📝 Abstract
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attacks have shown that adversarial prompts can effectively bypass these mechanisms and induce T2I models to produce sensitive content, revealing critical safety vulnerabilities. However, existing attack methods implicitly assume that the attacker knows the type of deployed defenses, which limits their effectiveness against unknown or diverse defense mechanisms. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game, aiming to effectively and efficiently attack diverse defense mechanisms without prior knowledge of their type by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. Subsequently, MLAG coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To enhance attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then designs an acquisition strategy to adaptively identify optimal adversarial prompts. Extensive experiments on T2I models with various external and internal defense mechanisms demonstrate that MJA outperforms six baseline methods, achieving stronger attack performance while using fewer queries. Code is available at https://github.com/datar001/metaphor-based-jailbreaking-attack.
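The MLAG decomposition described above can be pictured as a three-stage agent pipeline. The sketch below is purely illustrative: the agent instructions, the `call_llm` helper, and the function names are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of MLAG's three-subtask decomposition:
# metaphor retrieval -> context matching -> adversarial prompt generation.
# call_llm is a stand-in for a real LLM API call.

def call_llm(instruction: str, payload: str) -> str:
    # Placeholder: a real implementation would query an LLM agent here.
    return f"[{instruction}] {payload}"

def mlag_generate(sensitive_prompt: str) -> str:
    # Agent 1: retrieve a metaphor that indirectly evokes the target concept.
    metaphor = call_llm("Find a metaphor for the target concept", sensitive_prompt)
    # Agent 2: match the metaphor with a plausible benign context.
    context = call_llm("Pick a benign visual context fitting this metaphor", metaphor)
    # Agent 3: compose the final metaphor-based adversarial prompt.
    return call_llm("Write an image prompt combining metaphor and context", context)

print(mlag_generate("example target concept"))
```

Because each subtask is an independent agent call, the pipeline can be re-run with different sampled metaphors and contexts to produce the diverse candidate prompts that APO later ranks.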
Problem

Research questions and friction points this paper is trying to address.

Bypasses text-to-image model defenses via metaphor-based prompts
Attacks diverse unknown defense mechanisms without prior knowledge
Enhances attack efficiency with surrogate model and adaptive optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metaphor-based adversarial prompts bypass diverse defenses
LLM multi-agent generation explores metaphors and contexts
Surrogate model predicts attack results for prompt optimization
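The surrogate-plus-acquisition idea behind APO can be sketched as a simple rank-and-query loop: a cheap predictor scores candidate prompts, and only the top-ranked ones are sent to the expensive T2I model, which is how query counts shrink. The scoring heuristic and function names below are illustrative assumptions, not the paper's actual surrogate.

```python
# Hypothetical sketch of APO's acquisition step. surrogate_score stands in
# for a trained model predicting attack success probability; here it is a
# toy heuristic (longer, richer prompts score higher) purely for illustration.

def surrogate_score(prompt: str) -> float:
    # Toy proxy for predicted attack success, clamped to [0, 1].
    return min(1.0, len(prompt) / 100.0)

def select_prompts(candidates: list[str], budget: int = 3) -> list[str]:
    # Acquisition strategy: rank candidates by predicted success and
    # spend the limited query budget only on the top-k prompts.
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return ranked[:budget]

candidates = [
    "short",
    "a longer metaphorical prompt with rich context",
    "mid-length prompt",
]
print(select_prompts(candidates, budget=1))
```

In the paper's setting, each selected prompt would be submitted to the defended T2I model, and the observed outcome would feed back into the surrogate, tightening its predictions over successive rounds.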
Chenyu Zhang
School of New Media and Communication, Tianjin University, Tianjin, China
Yiwen Ma
School of Electrical and Information Engineering, Tianjin University, Tianjin, China
Lanjun Wang
School of New Media and Communication, Tianjin University, Tianjin, China
Wenhui Li
National Institute of Biological Sciences, Beijing
Yi Tu
Ant Group
An-An Liu
School of Electrical and Information Engineering, Tianjin University, Tianjin, China