JPU: Bridging Jailbreak Defense and Unlearning via On-Policy Path Rectification

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the persistent vulnerability of large language models to implicit jailbreaking pathways, which can emerge even after safety alignment or parameter-erasure-based machine unlearning due to dynamic reconfiguration of intermediate-layer parameters. To counter this, the paper introduces JPU, a novel approach that integrates jailbreak defense with machine unlearning by dynamically generating adversarial examples during inference to probe and identify emergent jailbreak trajectories in real time. These detected pathways are then actively redirected toward predefined safe anchors. By moving beyond conventional static erasure paradigms, JPU enables continuous, online correction of unsafe model behaviors while preserving the model's original capabilities and performance, thereby significantly enhancing robustness against dynamic jailbreak attacks.

πŸ“ Abstract
Despite extensive safety alignment, Large Language Models (LLMs) often fail against jailbreak attacks. While machine unlearning has emerged as a promising defense by erasing specific harmful parameters, current methods remain vulnerable to diverse jailbreaks. We first conduct an empirical study and discover that this failure mechanism is caused by jailbreaks primarily activating non-erased parameters in the intermediate layers. Further, by probing the underlying mechanism through which these circumvented parameters reassemble into the prohibited output, we verify the persistent existence of dynamic $\textbf{jailbreak paths}$ and show that the inability to rectify them constitutes the fundamental gap in existing unlearning defenses. To bridge this gap, we propose $\textbf{J}$ailbreak $\textbf{P}$ath $\textbf{U}$nlearning (JPU), which is the first to rectify dynamic jailbreak paths towards safety anchors by dynamically mining on-policy adversarial samples to expose vulnerabilities and identify jailbreak paths. Extensive experiments demonstrate that JPU significantly enhances jailbreak resistance against dynamic attacks while preserving the model's utility.
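The abstract's core loop (mine on-policy adversarial samples, detect an emergent jailbreak path, rectify it toward a safety anchor) can be sketched in miniature. The following is a hypothetical toy illustration, not the paper's actual algorithm: the "jailbreak path" is stood in for by a hidden-state vector aligning with a harmful direction, and "rectification" by moving that state toward a predefined safe anchor; all names, vectors, and thresholds are invented for illustration.

```python
# Toy sketch of jailbreak-path rectification (illustrative assumptions only):
# a hidden state that aligns with a probed harmful direction beyond a
# threshold is pulled a fraction of the way toward a safe anchor; benign
# states are left untouched, mirroring the paper's utility-preservation goal.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rectify_toward_anchor(hidden, safe_anchor, harmful_dir,
                          threshold=0.5, step=0.8):
    """Detect a jailbreak path (alignment with harmful_dir above threshold)
    and redirect the state `step` of the way toward safe_anchor."""
    if dot(hidden, harmful_dir) > threshold:  # jailbreak path detected
        return [h + step * (a - h) for h, a in zip(hidden, safe_anchor)]
    return hidden  # benign path: unchanged, preserving model utility

safe_anchor = [0.0, 1.0]   # direction tied to the safe/refusal output
harmful_dir = [1.0, 0.0]   # direction exposed by on-policy adversarial probes
jailbroken = [0.9, 0.1]    # state activated by an adversarial prompt
benign = [0.1, 0.9]        # state from an ordinary prompt

print(rectify_toward_anchor(jailbroken, safe_anchor, harmful_dir))
print(rectify_toward_anchor(benign, safe_anchor, harmful_dir))
```

The benign state passes through unchanged, while the adversarially activated state is pulled most of the way toward the anchor; the real method operates on model parameters and generation trajectories rather than a single toy vector.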
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
machine unlearning
dynamic jailbreak paths
safety alignment
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

jailbreak defense
machine unlearning
on-policy adversarial sampling
jailbreak path rectification
large language model safety
Xi Wang
University of Defense Technology
LLM Safety · Jailbreak · Safety Alignment
Songlei Jian
National University of Defense Technology
representation learning · machine learning · data science
Shasha Li
National University of Defense Technology
Xiaopeng Li
National University of Defense Technology
Zhaoye Li
National University of Defense Technology
Bing Ji
National University of Defense Technology
Baosheng Wang
National University of Defense Technology
Jie Yu
National University of Defense Technology