Forewarned is Forearmed: Pre-Synthesizing Jailbreak-like Instructions to Enhance LLM Safety Guardrail to Potential Attacks

📅 2025-08-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing large language models (LLMs) generalize poorly to novel jailbreak attacks because of the distributional shift between safety-aligned training data and real-world adversarial inputs, forcing developers into a reactive patching paradigm. To address this, we propose IMAGINE, the first framework to iteratively synthesize jailbreak-like instructions via embedding-space distribution analysis. By modeling the evolutionary dynamics of text generation distributions, IMAGINE dynamically expands the coverage of safety training data, shifting the paradigm from passive mitigation to proactive defense. Extensive experiments on Qwen2.5, Llama3.1, and Llama3.2 show that IMAGINE significantly reduces attack success rates across diverse jailbreak attack families without compromising the models' original utility or task performance.
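
As a rough illustration of the gap-filling idea described above (not the authors' code), the following Python sketch iteratively synthesizes candidate instructions and keeps only those that land in embedding regions the safety corpus does not yet cover. The `embed` and `synthesize_variants` functions and the threshold `tau` are placeholder assumptions standing in for the paper's embedding model and LLM-driven generator.

```python
# Hypothetical sketch of an IMAGINE-style iterative synthesis loop.
# Not the authors' implementation: the embedder, the generator, and
# the novelty threshold below are illustrative placeholders.
import numpy as np

def embed(texts):
    # Placeholder: stand-in for a real sentence embedder
    # (in practice, a trained text-embedding model).
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))

def synthesize_variants(seed_instructions):
    # Placeholder: in the paper this is an LLM-driven generator whose
    # output distribution evolves across iterations.
    return [s + " (paraphrased)" for s in seed_instructions]

def min_dist_to_corpus(candidates_emb, corpus_emb):
    # Cosine distance from each candidate to its nearest corpus point.
    a = candidates_emb / np.linalg.norm(candidates_emb, axis=1, keepdims=True)
    b = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
    return 1.0 - (a @ b.T).max(axis=1)

def expand_safety_corpus(safety_corpus, seeds, n_iters=3, tau=0.15):
    corpus = list(safety_corpus)
    for _ in range(n_iters):
        candidates = synthesize_variants(seeds)
        dists = min_dist_to_corpus(embed(candidates), embed(corpus))
        # Keep candidates that fall in under-covered embedding regions.
        novel = [c for c, d in zip(candidates, dists) if d > tau]
        corpus.extend(novel)
        seeds = novel or seeds  # evolve the generation distribution
    return corpus
```

The accepted candidates would then be added to the safety-alignment corpus before fine-tuning, so the model sees jailbreak-like inputs ahead of real attacks.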

📝 Abstract
Despite advances in training large language models (LLMs) to refuse malicious instructions, widely used LLMs remain vulnerable to jailbreak attacks in which attackers generate instructions whose distribution differs from that of safety alignment corpora. New attacks expose LLMs' inability to recognize unseen malicious instructions, highlighting a critical distributional mismatch between training data and real-world attacks that forces developers into reactive patching cycles. To tackle this challenge, we propose IMAGINE, a synthesis framework that leverages embedding-space distribution analysis to generate jailbreak-like instructions. This approach effectively fills the distributional gap between authentic jailbreak patterns and safety alignment corpora. IMAGINE follows an iterative optimization process that dynamically evolves the text generation distribution across iterations, thereby augmenting the coverage of the safety alignment data distribution with synthesized examples. Based on the safety-aligned corpus enhanced through IMAGINE, our framework achieves significant decreases in attack success rate on Qwen2.5, Llama3.1, and Llama3.2 without compromising their utility.
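
The abstract's central claim is a distributional gap between the safety corpus and real attacks in embedding space. One common way to quantify such a gap is maximum mean discrepancy (MMD) between the two embedding sets; the sketch below uses that statistic as an illustrative choice (the paper does not necessarily use this exact measure), with synthetic stand-in embeddings.

```python
# Hypothetical sketch: quantifying the embedding-space gap between a
# safety-alignment corpus and observed jailbreak instructions with an
# RBF-kernel MMD. The choice of statistic is ours, not the paper's.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    sq = (x**2).sum(1)[:, None] + (y**2).sum(1)[None, :] - 2 * x @ y.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=0.5):
    # Biased estimate of squared MMD between samples x and y.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
safety_emb = rng.normal(0.0, 1.0, size=(200, 32))  # stand-in embeddings
attack_emb = rng.normal(0.8, 1.0, size=(200, 32))  # shifted distribution
print(f"MMD^2 between corpora: {mmd2(safety_emb, attack_emb):.4f}")
```

Under this view, a successful synthesis framework should drive the statistic toward zero as synthesized examples are added to the safety corpus.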
Problem

Research questions and friction points this paper is trying to address.

Addressing LLM vulnerability to unseen jailbreak attack instructions
Bridging distributional gap between training data and real-world attacks
Reducing attack success rates without compromising model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding space analysis generates jailbreak-like instructions
Iterative optimization evolves text generation distributions (see the sketch after this list)
Augments safety alignment data with synthesized examples
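
To make the "evolving distribution" idea in the second point concrete, the sketch below abstracts the generator as a Gaussian proposal in embedding space whose mean drifts toward under-covered regions each iteration. This is purely illustrative; the actual IMAGINE generator operates on text, not directly on embeddings.

```python
# Hypothetical sketch of an evolving generation distribution: a Gaussian
# proposal whose mean moves toward regions the corpus does not cover.
import numpy as np

rng = np.random.default_rng(1)
corpus = rng.normal(0.0, 1.0, size=(300, 16))  # safety-corpus embeddings
mu = corpus.mean(axis=0)                       # initial proposal mean

for step in range(5):
    proposals = mu + rng.normal(0.0, 1.0, size=(100, 16))
    # Distance of each proposal to its nearest corpus embedding.
    d = np.linalg.norm(
        proposals[:, None, :] - corpus[None, :, :], axis=2
    ).min(axis=1)
    novel = proposals[d > np.quantile(d, 0.8)]  # keep the most novel 20%
    corpus = np.vstack([corpus, novel])         # augment coverage
    mu = novel.mean(axis=0)                     # drift toward the gap
    print(f"iter {step}: added {len(novel)}, mean novelty {d.mean():.3f}")
```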
Sheng Liu
Media Synthesis and Forensics Lab, Institute of Computing Technology, Chinese Academy of Sciences
Qiang Sheng
Chinese Academy of Sciences
fake news detection · fact checking · LLM safety
Danding Wang
Institute of Computing Technology, Chinese Academy of Sciences
Explainable AI · Media Forensics · Human-Computer Interaction
Yang Li
Media Synthesis and Forensics Lab, Institute of Computing Technology, Chinese Academy of Sciences
Guang Yang
Zhongguancun Laboratory
Juan Cao
Professor of Mathematics, Xiamen University
Computer Aided Geometric Design · Computer Graphics