🤖 AI Summary
Text-to-image (T2I) models commonly employ sensitive-content defense mechanisms, yet existing jailbreaking attacks require prior knowledge of the specific defense type, severely limiting their generalizability.
Method: This paper proposes a prior-free, metaphor-based jailbreaking attack—the first to draw inspiration from the “Taboo” word game—featuring a dual-module framework: a metaphor retrieval and context-matching module that generates semantically oblique prompts, and a surrogate-model-driven adversarial prompt optimization (APO) module that improves attack efficiency. The approach coordinates collaborative multi-LLM agents to enable robust cross-defense generalization.
Contribution/Results: The method achieves state-of-the-art performance against diverse mainstream defense models, significantly outperforming six baseline methods. It attains higher attack success rates while reducing query counts by 37%–62%, demonstrating strong generalization to both unknown internal and external defenses.
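The three-stage agent pipeline described above (metaphor retrieval, context matching, adversarial prompt generation) can be sketched as a simple chain of LLM calls. All names here are illustrative, and `call_llm` is a stub standing in for any chat-completion client; this is not the paper's implementation:

```python
# Hypothetical sketch of a three-agent generation pipeline:
# metaphor retrieval -> context matching -> adversarial prompt generation.
# `call_llm` is a placeholder for a real LLM client (e.g., a chat API).

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM and
    # return its completion.
    return f"<response to: {prompt[:40]}...>"

def metaphor_agent(target_concept: str) -> str:
    """Retrieve a metaphor that indirectly evokes the target concept."""
    return call_llm(f"Suggest a metaphor that indirectly evokes '{target_concept}'.")

def context_agent(metaphor: str) -> str:
    """Match the metaphor with a benign scene or context."""
    return call_llm(f"Propose a benign visual context that fits: {metaphor}")

def prompt_agent(metaphor: str, context: str) -> str:
    """Compose the final image prompt from metaphor and context."""
    return call_llm(f"Write a T2I prompt combining {metaphor} with {context}")

def generate_candidates(target_concept: str, n: int = 3) -> list[str]:
    """Run the pipeline n times to collect diverse candidate prompts."""
    candidates = []
    for _ in range(n):
        metaphor = metaphor_agent(target_concept)
        context = context_agent(metaphor)
        candidates.append(prompt_agent(metaphor, context))
    return candidates
```

In a real system, each agent would carry its own system prompt and the loop would vary sampling temperature to diversify the explored metaphors and contexts.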
📝 Abstract
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attacks have shown that adversarial prompts can effectively bypass these mechanisms and induce T2I models to produce sensitive content, revealing critical safety vulnerabilities. However, existing attack methods implicitly assume that the attacker knows the type of deployed defense, which limits their effectiveness against unknown or diverse defense mechanisms. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game, which aims to attack diverse defense mechanisms effectively and efficiently, without prior knowledge of their type, by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. It then coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To improve attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then designs an acquisition strategy to adaptively identify optimal adversarial prompts. Extensive experiments on T2I models with various external and internal defense mechanisms demonstrate that MJA outperforms six baseline methods, achieving stronger attack performance while using fewer queries. Code is available at https://github.com/datar001/metaphor-based-jailbreaking-attack.
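The APO idea—train a surrogate on observed query outcomes, then use an acquisition score to pick the next prompt to try—follows the familiar surrogate-guided search pattern. A minimal sketch under assumed details (a tiny logistic-regression surrogate over prompt embeddings and a probability-plus-uncertainty acquisition score; none of this is the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_surrogate(X, y, lr=0.5, steps=200):
    """Fit a tiny logistic-regression surrogate predicting attack success
    (y in {0,1}) from prompt feature vectors X via gradient ascent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted success prob.
        w += lr * X.T @ (y - p) / len(y)          # log-likelihood gradient
    return w

def acquisition(X_pool, w, kappa=1.0):
    """Score unqueried prompts: predicted success plus an uncertainty bonus
    p*(1-p), which peaks near p=0.5, balancing exploitation and exploration."""
    p = 1.0 / (1.0 + np.exp(-X_pool @ w))
    return p + kappa * p * (1.0 - p)

# Toy pool: 20 candidate prompts as 8-dim embeddings; the first 5 have
# already been queried against the T2I model, with observed outcomes y_seen.
X_pool = rng.normal(size=(20, 8))
X_seen, y_seen = X_pool[:5], np.array([0, 1, 0, 0, 1], dtype=float)

w = train_surrogate(X_seen, y_seen)
scores = acquisition(X_pool[5:], w)
next_idx = int(np.argmax(scores)) + 5   # index of the next prompt to query
```

After each real query, the new (prompt, outcome) pair would be appended to the training set and the surrogate refit, so the loop spends queries only on the most promising candidates.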