A Robust Certified Machine Unlearning Method Under Distribution Shift

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the significant performance degradation of existing certified unlearning methods under non-i.i.d. deletion requests, which arises from the resulting distribution shift between the original and retained datasets. To tackle this challenge, the authors propose the first distribution-aware certified unlearning framework tailored to non-i.i.d. deletion scenarios. The approach applies iterative Newton updates constrained to a trust region to closely approximate the fully retrained model, thereby achieving efficient (ε, δ)-certified unlearning. By deriving a tighter pre-run bound on the gradient residual, the method substantially mitigates the adverse effects of distribution shift. Extensive experiments demonstrate that the framework consistently outperforms current certified unlearning techniques across multiple evaluation metrics in settings with distribution shift.

📝 Abstract
The Newton method has been widely adopted to achieve certified unlearning. A critical assumption in existing approaches is that the data requested for unlearning are selected i.i.d. (independent and identically distributed). However, the problem of certified unlearning under non-i.i.d. deletions remains largely unexplored. In practice, unlearning requests are inherently biased, leading to non-i.i.d. deletions and causing distribution shifts between the original and retained datasets. In this paper, we show that certified unlearning with the Newton method becomes inefficient and ineffective under non-i.i.d. unlearning sets. We then propose a better approach: a distribution-aware certified unlearning framework based on iterative Newton updates constrained by a trust region. Our method provides a closer approximation to the retrained model and yields a tighter pre-run bound on the gradient residual, thereby ensuring efficient (ε, δ)-certified unlearning. To demonstrate its practical effectiveness under distribution shift, we also conduct extensive experiments across multiple evaluation metrics, providing a comprehensive assessment of our approach.
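The abstract describes iterative Newton updates on the retained data, with each step clipped to a trust region, followed by noise injection to obtain an (ε, δ)-style certificate. The sketch below is an illustrative reading of that idea on a toy ridge-regression objective, not the paper's exact algorithm; the function name, the step count, and the noise parameter are assumptions for illustration.

```python
import numpy as np

def unlearn_newton_trust_region(theta, grad_fn, hess_fn, radius=0.5,
                                n_steps=10, noise_std=0.0, rng=None):
    """Illustrative sketch (not the paper's algorithm): iterate Newton
    updates on the retained-data objective, clipping each step to the
    trust-region radius, then optionally add Gaussian noise as in
    noisy certified-unlearning schemes."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta, dtype=float).copy()
    for _ in range(n_steps):
        g = grad_fn(theta)                # gradient on retained data
        H = hess_fn(theta)                # Hessian on retained data
        step = np.linalg.solve(H, g)      # full Newton step
        norm = np.linalg.norm(step)
        if norm > radius:                 # trust-region constraint
            step *= radius / norm
        theta -= step
    if noise_std > 0:                     # Gaussian perturbation for the certificate
        theta += rng.normal(0.0, noise_std, size=theta.shape)
    return theta

# Toy retained objective: ridge regression 0.5*||Xw - y||^2 + 0.5*lam*||w||^2
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 1.5])
lam = 0.1
grad = lambda w: X.T @ (X @ w - y) + lam * w
hess = lambda w: X.T @ X + lam * np.eye(2)

# Trust-region Newton iterates converge to the exact retrained solution.
w_hat = unlearn_newton_trust_region(np.zeros(2), grad, hess,
                                    radius=0.3, n_steps=20)
w_star = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
```

For this quadratic objective the unclipped Newton step would jump to the retrained optimum in one iteration; the trust region instead yields a sequence of bounded steps, which is the property the paper exploits to bound the gradient residual before running the update.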
Problem

Research questions and friction points this paper is trying to address.

certified machine unlearning
distribution shift
non-i.i.d. deletions
Newton method
data bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

certified machine unlearning
distribution shift
non-i.i.d. deletion
trust region
Newton method
👥 Authors
Jinduo Guo
Department of Computer Science, Johns Hopkins University
Yinzhi Cao
Johns Hopkins University
Computer Security