🤖 AI Summary
Existing large language model (LLM) unlearning methods suffer from a severe robustness deficiency: optimizing a single-point loss drives models toward sharp minima in parameter space, leaving them vulnerable to relearning attacks that recover "unlearned" knowledge. This work first identifies and characterizes this fragility mechanism, then proposes StableUN, a neighborhood-aware, multi-point robust unlearning framework built on a bi-level feedback-guided optimization strategy. It jointly models forgetting and retention feedback under adversarial perturbations and enforces gradient projection constraints, searching a local parameter neighborhood for flat, stable, and utility-preserving unlearning solutions. Evaluated on the WMDP and MUSE benchmarks, the method significantly enhances resilience against both relearning and jailbreaking attacks, improving unlearning robustness by up to 2.3× over prior approaches while maintaining language modeling and task performance on par with state-of-the-art methods.
📝 Abstract
Current LLM unlearning methods face a critical security vulnerability that undermines their fundamental purpose: while they appear to successfully remove sensitive or harmful knowledge, this "forgotten" information remains recoverable through relearning attacks. We identify the root cause: conventional methods that optimize the forgetting loss at individual data points drive model parameters toward sharp minima in the loss landscape. In these unstable regions, even minimal parameter perturbations can drastically alter the model's behavior. Relearning attacks exploit this vulnerability by using just a few fine-tuning samples to navigate the steep gradients surrounding these unstable regions, rapidly recovering knowledge that was supposedly erased. This exposes a critical robustness gap between apparent unlearning and actual knowledge removal. To address this issue, we propose StableUN, a bi-level feedback-guided optimization framework that explicitly seeks more stable parameter regions via neighborhood-aware optimization. It integrates forgetting feedback, which uses adversarial perturbations to probe parameter neighborhoods, with remembering feedback that preserves model utility, aligning the two objectives through gradient projection. Experiments on the WMDP and MUSE benchmarks demonstrate that our method is significantly more robust against both relearning and jailbreaking attacks while maintaining competitive utility performance.
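To make the mechanism concrete, below is a minimal PyTorch sketch of what one such update step could look like. It is an illustration under stated assumptions, not the paper's implementation: the SAM-style ascent step is one plausible reading of "adversarial perturbations to probe parameter neighborhoods", the conflict-aware projection is a standard PCGrad-style construction, and the names `stableun_step`, `forget_loss`, `retain_loss`, and `rho` are hypothetical stand-ins.

```python
import torch

def stableun_step(model, forget_batch, retain_batch,
                  forget_loss, retain_loss, opt, rho=0.05):
    """One neighborhood-aware unlearning step (illustrative sketch).

    Assumptions (not from the paper): forget_loss and retain_loss are
    callables (model, batch) -> scalar loss, with forget_loss formulated
    so that minimizing it induces forgetting (e.g., NPO-style); rho is
    the radius of the adversarial probe of the parameter neighborhood.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # --- Forgetting feedback: probe the parameter neighborhood. ---
    # SAM-style ascent: perturb weights toward higher forget loss, so the
    # outer descent step favors flat, stable minima rather than sharp ones.
    loss_f = forget_loss(model, forget_batch)
    grads = torch.autograd.grad(loss_f, params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)  # move to the worst-case neighbor

    # Forgetting gradient evaluated at the perturbed point.
    loss_f_adv = forget_loss(model, forget_batch)
    g_forget = torch.autograd.grad(loss_f_adv, params)

    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # restore the original weights

    # --- Remembering feedback: utility-preserving gradient. ---
    loss_r = retain_loss(model, retain_batch)
    g_retain = torch.autograd.grad(loss_r, params)

    # --- Gradient projection: align the two objectives. ---
    # If the forgetting gradient conflicts with retention (negative inner
    # product), remove its component along the retention direction.
    dot = sum((gf * gr).sum() for gf, gr in zip(g_forget, g_retain))
    if dot < 0:
        retain_sq = sum((gr ** 2).sum() for gr in g_retain) + 1e-12
        g_forget = [gf - (dot / retain_sq) * gr
                    for gf, gr in zip(g_forget, g_retain)]

    # Combined update: projected forgetting gradient plus retention gradient.
    opt.zero_grad()
    with torch.no_grad():
        for p, gf, gr in zip(params, g_forget, g_retain):
            p.grad = gf + gr
    opt.step()
    return loss_f.item(), loss_r.item()
```

The key design point the sketch tries to capture is the bi-level structure: the inner perturbation asks "how does forgetting behave in this parameter neighborhood?" before the outer step commits to an update, while the projection keeps that update from undoing the retention objective.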