Tackling Federated Unlearning as a Parameter Estimation Problem

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), data residency at clients poses significant challenges for efficient, privacy-compliant data unlearning. This paper introduces the first information-theoretic federated unlearning framework, formulating unlearning as a parameter estimation problem. Leveraging Hessian-based second-order curvature information, it precisely identifies and selectively resets sensitive model parameters; lightweight federated fine-tuning then enables rapid, localized unlearning without server access to raw client data. The method supports both class-level and client-level unlearning, is model-agnostic, and seamlessly integrates with standard FL pipelines. Experiments demonstrate that membership inference attack (MIA) success rates drop to near-random levels (~0.5), normalized accuracy remains high (~0.9), and both class-specific knowledge and backdoor triggers are effectively erased—substantially enhancing model privacy and integrity.

📝 Abstract
Privacy regulations require the erasure of data from deep learning models. This is a significant challenge that is amplified in Federated Learning, where data remains on clients, making full retraining or coordinated updates often infeasible. This work introduces an efficient Federated Unlearning framework based on information theory, modeling leakage as a parameter estimation problem. Our method uses second-order Hessian information to identify and selectively reset only the parameters most sensitive to the data being forgotten, followed by minimal federated retraining. This model-agnostic approach supports categorical and client unlearning without requiring server access to raw client data after initial information aggregation. Evaluations on benchmark datasets demonstrate strong privacy (MIA success near random, categorical knowledge erased) and high performance (Normalized Accuracy against re-trained benchmarks of $\approx$ 0.9), while aiming for increased efficiency over complete retraining. Furthermore, in a targeted backdoor attack scenario, our framework effectively neutralizes the malicious trigger, restoring model integrity. This offers a practical solution for data forgetting in FL.
Problem

Research questions and friction points this paper is trying to address.

Efficiently erase client data from federated learning models
Selectively reset parameters sensitive to forgotten data
Maintain model performance while ensuring privacy compliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modeling unlearning as parameter estimation problem
Using second-order Hessian to identify sensitive parameters
Selective reset with minimal federated retraining
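The pipeline sketched in these bullets can be illustrated with a toy example. The snippet below is a minimal sketch, not the paper's implementation: it assumes the common diagonal-Fisher (squared-gradient) approximation as a stand-in for the Hessian-based curvature score, uses a plain linear model with squared-error loss, and resets the top-k most sensitive parameters to zero; the function names (`fisher_diag`, `selective_reset`) are hypothetical.

```python
def grad(w, x, t):
    """Gradient of squared-error loss 0.5*(w.x - t)^2 with respect to w."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - t
    return [err * xi for xi in x]

def fisher_diag(w, forget_data):
    """Squared-gradient (diagonal Fisher) sensitivity score per parameter,
    averaged over the forget set -- a cheap proxy for second-order
    Hessian curvature (assumption, not the paper's exact estimator)."""
    d = [0.0] * len(w)
    for x, t in forget_data:
        g = grad(w, x, t)
        for j, gj in enumerate(g):
            d[j] += gj * gj
    return [dj / len(forget_data) for dj in d]

def selective_reset(w, scores, k):
    """Re-initialize (here: zero out) the k most sensitive parameters,
    leaving the rest untouched; fine-tuning on retained data would follow."""
    top = sorted(range(len(w)), key=lambda j: scores[j], reverse=True)[:k]
    w_new = list(w)
    for j in top:
        w_new[j] = 0.0
    return w_new, top

# Toy run: the forget example only activates the third parameter,
# so only that parameter is scored as sensitive and reset.
scores = fisher_diag([1.0, 2.0, 3.0], [([0.0, 0.0, 1.0], 0.0)])
w_new, top = selective_reset([1.0, 2.0, 3.0], scores, 1)
```

In the full federated setting, each client would compute such scores locally and the server would aggregate them once, after which the reset and the lightweight fine-tuning rounds proceed through the standard FL pipeline.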
Antonio Balordi
CASD - Italian Defense University, Rome, Italy
Lorenzo Manini
DIFA - University of Bologna, Bologna, Italy
Fabio Stella
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
Alessio Merlo
Professor of Cybersecurity, Director of the School of Advanced Defense Studies (CASD), Rome, Italy
Computer Security · Mobile Security