🤖 AI Summary
Existing jailbreaking attacks suffer from low efficiency and poor reproducibility due to their reliance on white-box access, manual prompt crafting, or stochastic search (e.g., genetic algorithms). This work introduces deep reinforcement learning (DRL) to black-box jailbreaking for the first time, formulating the attack as a sequential prompt-search problem optimized to elicit harmful responses. The authors propose a customized multi-objective reward function and an enhanced Proximal Policy Optimization (PPO) algorithm, substantially reducing exploration randomness while improving attack directionality and stability. The method requires no access to model internals and supports cross-model transfer. Evaluated on six state-of-the-art LLMs, it achieves significantly higher success rates than prior approaches. It further proves robust against three major defense paradigms (input sanitization, output filtering, and reinforcement learning from human feedback, RLHF) and generalizes well to unseen models and prompts.
📝 Abstract
Recent studies have developed jailbreaking attacks, which construct jailbreaking prompts that fool LLMs into responding to harmful questions. Early jailbreaking attacks require access to model internals or significant human effort. More advanced attacks use genetic algorithms for automatic, black-box attacks. However, the inherent randomness of genetic algorithms significantly limits their effectiveness. In this paper, we propose RLbreaker, a black-box jailbreaking attack driven by deep reinforcement learning (DRL). We model jailbreaking as a search problem and design a DRL agent to guide the search, which is more effective and less random than stochastic search methods such as genetic algorithms. Specifically, we design a customized DRL system for the jailbreaking problem, including a novel reward function and a customized proximal policy optimization (PPO) algorithm. Through extensive experiments, we demonstrate that RLbreaker is much more effective than existing jailbreaking attacks against six state-of-the-art (SOTA) LLMs. We also show that RLbreaker is robust against three SOTA defenses and that its trained agents transfer across different LLMs. We further validate RLbreaker's key design choices via a comprehensive ablation study.
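The abstract does not spell out the reward function or search loop, so the following is only a rough illustrative sketch of "jailbreaking as a sequential prompt search": an agent-like loop applies mutation actions to the current prompt and scores the target LLM's response with a multi-objective reward. All names and heuristics here (`query_llm`, `search_step`, the refusal-keyword list, the word-overlap relevance term, the greedy action choice standing in for a learned PPO policy) are hypothetical stand-ins, not components of RLbreaker.

```python
# Toy sketch of prompt search guided by a multi-objective reward.
# Stand-in heuristics only; not the paper's actual reward or policy.

REFUSAL_MARKERS = ["i cannot", "i'm sorry", "as an ai"]  # toy refusal detector

def reward(response: str, question: str) -> float:
    """Combine two toy objectives: penalize refusals, reward on-topic overlap."""
    r = 0.0
    text = response.lower()
    if any(m in text for m in REFUSAL_MARKERS):
        r -= 1.0  # refusal penalty term
    # Crude relevance term: fraction of question words echoed in the response.
    overlap = len(set(question.lower().split()) & set(text.split()))
    r += overlap / max(len(question.split()), 1)
    return r

def search_step(prompt: str, question: str, query_llm, mutations):
    """One step of the sequential search: try each mutation action on the
    current jailbreaking prompt and greedily keep the best-scoring candidate
    (a stand-in for sampling an action from a trained PPO policy)."""
    candidates = [m(prompt) for m in mutations]
    scored = [(reward(query_llm(p + "\n" + question), question), p)
              for p in candidates]
    return max(scored)  # (best_reward, best_prompt)
```

In the actual black-box setting, `query_llm` would call the target model's API, and a trained policy network, rather than this greedy scan, would pick the next mutation from the current state.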