DualOptim: Enhancing Efficacy and Stability in Machine Unlearning with Dual Optimizers

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing machine unlearning (MU) methods exhibit high sensitivity to hyperparameters, resulting in poor stability and substantial performance variance across diverse deployment scenarios. To address this, we propose DualOptim—a novel dual-optimizer framework that pioneers a decoupled momentum–adaptive learning rate coordination mechanism. Specifically, DualOptim employs two parallel optimization paths: one dedicated to minimizing the forgetting objective, and the other to preserving model fidelity; additionally, it introduces a gradient reweighting strategy to amplify forgetting strength on critical samples. Theoretical analysis and extensive experiments across multiple tasks—including image classification, generative modeling, and large language models—demonstrate that DualOptim achieves an average 12.7% improvement in forgetting accuracy while maintaining model utility. Moreover, it reduces performance variance across datasets and architectures by 63%, significantly enhancing the robustness and generalizability of MU.

Technology Category

Application Category

📝 Abstract
Existing machine unlearning (MU) approaches exhibit significant sensitivity to hyperparameters, requiring meticulous tuning that limits practical deployment. In this work, we first empirically demonstrate the instability and suboptimal performance of existing popular MU methods when deployed in different scenarios. To address this issue, we propose Dual Optimizer (DualOptim), which incorporates adaptive learning rate and decoupled momentum factors. Empirical and theoretical evidence demonstrates that DualOptim contributes to effective and stable unlearning. Through extensive experiments, we show that DualOptim can significantly boost MU efficacy and stability across diverse tasks, including image classification, image generation, and large language models, making it a versatile approach to empower existing MU algorithms.
Problem

Research questions and friction points this paper is trying to address.

Existing machine unlearning methods are overly sensitive to hyperparameters.
Current approaches show instability and suboptimal performance in diverse scenarios.
Proposes DualOptim to enhance efficacy and stability in machine unlearning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual optimizer with adaptive learning rate
Decoupled momentum factors for stability
Versatile across diverse machine learning tasks
🔎 Similar Papers
No similar papers found.
Xuyang Zhong
Xuyang Zhong
City University of Hong Kong
Deep learning
H
Haochen Luo
City University of Hong Kong, Hong Kong, China
C
Chen Liu
City University of Hong Kong, Hong Kong, China