NoT: Federated Unlearning via Weight Negation

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, efficiently enabling participant removal without accessing raw data or auxiliary storage poses a critical privacy-compliance challenge. This paper proposes the first weight-inversion (×−1)-based federated unlearning mechanism, theoretically proven to disrupt inter-layer co-adaptation while ensuring both thorough unlearning and efficient model recovery. Our method requires neither historical gradients, local models, nor the to-be-removed data; it achieves unlearning via a single round of lightweight perturbation and trajectory analysis, making it natively compatible with mainstream federated frameworks. Experiments across three benchmark datasets and three model architectures demonstrate that our approach improves unlearning success rate by 12.6%–28.4%, reduces communication overhead by 57%–73%, and cuts computational cost by 41%–69% over state-of-the-art baselines—substantially outperforming existing solutions.

📝 Abstract
Federated unlearning (FU) aims to remove a participant's data contributions from a trained federated learning (FL) model, ensuring privacy and regulatory compliance. Traditional FU methods often depend on auxiliary storage on either the client or server side or require direct access to the data targeted for removal, a dependency that may not be feasible if the data is no longer available. To overcome these limitations, we propose NoT, a novel and efficient FU algorithm based on weight negation (multiplying by -1), which circumvents the need for additional storage and access to the target data. We argue that effective and efficient unlearning can be achieved by perturbing model parameters away from the set of optimal parameters, yet remaining well-positioned for quick re-optimization. This technique, though seemingly contradictory, is theoretically grounded: we prove that the weight negation perturbation effectively disrupts inter-layer co-adaptation, inducing unlearning while preserving an approximate optimality property, thereby enabling rapid recovery. Experimental results across three datasets and three model architectures demonstrate that NoT significantly outperforms existing baselines in unlearning efficacy as well as in communication and computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Federated unlearning must remove a participant's data contributions from a trained FL model for privacy and regulatory compliance.
Existing FU methods depend on auxiliary client- or server-side storage, or on direct access to the data targeted for removal, which may no longer be available.
The challenge is to unlearn both effectively and efficiently without extra storage or the target data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weight negation for federated unlearning
No additional storage or data access needed
Perturbs model parameters for quick re-optimization
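The core operation behind these points can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: every parameter tensor is multiplied by -1, which preserves parameter magnitudes (the approximate-optimality property the paper invokes) while flipping signs, disrupting inter-layer co-adaptation. The subsequent fine-tuning round that recovers model utility is not shown.

```python
import numpy as np

def negate_weights(params):
    # NoT's perturbation: multiply every parameter tensor by -1.
    # Norms are unchanged, so the model stays "well-positioned"
    # for re-optimization, but layer co-adaptation is broken.
    return [-w for w in params]

# Toy two-layer model: a weight matrix and a bias vector.
params = [np.array([[0.5, -1.2], [0.3, 0.8]]), np.array([0.1, -0.4])]
negated = negate_weights(params)
```

Note that the perturbation is norm-preserving and involutive (negating twice restores the original weights), and it requires no stored gradients, no historical models, and no access to the removed data, which is why it composes with standard FL aggregation.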
Authors
Yasser H. Khalil, Noah's Ark Lab, Canada (Agentic AI, Transfer Learning, Federated Learning, Machine Unlearning, Autonomous Driving)
L. Brunswic, Huawei Noah's Ark Lab, Montreal, Canada
Soufiane Lamghari, Huawei Noah's Ark Lab, Montreal, Canada
Xu Li, Huawei Technologies Canada Inc., Ottawa, Canada
Mahdi Beitollahi, Noah's Ark Lab Montreal (Distributed Learning, LLMs, Differential Privacy, Federated Learning)
Xi Chen, Huawei Noah's Ark Lab, Montreal, Canada