DRAGD: A Federated Unlearning Data Reconstruction Attack Based on Gradient Differences

📅 2025-07-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
While federated unlearning mechanisms support client data deletion, their gradient exchange process may inadvertently leak the very data being deleted. This work first reveals that the difference between global model gradients before and after unlearning can be exploited to reconstruct the original training samples, posing a novel privacy threat. Method: We propose DRAGD, an attack that reconstructs deleted data through analytical gradient-difference analysis and reverse optimization. To improve reconstruction fidelity on complex modalities such as faces, we extend DRAGD into DRAGDP, which incorporates prior-knowledge guidance from public auxiliary data. Contribution/Results: Extensive experiments on CIFAR-10, CelebA, and other benchmarks show that DRAGDP significantly outperforms existing attacks in both reconstruction quality and the severity of privacy leakage. These findings expose a critical, previously overlooked vulnerability in federated unlearning mechanisms.
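The analytical side of a gradient-difference attack can be illustrated with a toy sketch. Everything below is illustrative, not the paper's actual construction: a single fully connected softmax classifier, and the simplifying assumption that the before/after gradient difference equals the deleted sample's own gradient. Under that assumption, a fully connected layer with a bias leaks its input in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy single-layer classifier W x + b; shapes are illustrative.
W = rng.normal(size=(4, 8))
b = rng.normal(size=4)

# The deleted sample whose gradient contribution we assume shows up
# as the before/after global-gradient difference.
x_del = rng.normal(size=8)
y_del = 2

# Per-sample cross-entropy gradients: dL/dW = (p - one_hot(y)) x^T
# and dL/db = p - one_hot(y).
p = softmax(W @ x_del + b)
p[y_del] -= 1.0
gW_diff = np.outer(p, x_del)   # observed weight-gradient difference
gb_diff = p                    # observed bias-gradient difference

# Every row of gW_diff equals gb_diff[i] * x_del, so any row with a
# nonzero bias gradient reveals the input exactly.
i = int(np.argmax(np.abs(gb_diff)))
x_rec = gW_diff[i] / gb_diff[i]

print(np.allclose(x_rec, x_del))  # → True
```

In realistic settings the difference aggregates many updates and the model is deep, which is why the paper combines this kind of analysis with reverse optimization rather than relying on a closed form alone.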

📝 Abstract
Federated learning enables collaborative machine learning while preserving data privacy. However, the rise of federated unlearning, designed to allow clients to erase their data from the global model, introduces new privacy concerns. Specifically, the gradient exchanges during the unlearning process can leak sensitive information about deleted data. In this paper, we introduce DRAGD, a novel attack that exploits gradient discrepancies before and after unlearning to reconstruct forgotten data. We also present DRAGDP, an enhanced version of DRAGD that leverages publicly available prior data to improve reconstruction accuracy, particularly for complex datasets like facial images. Extensive experiments across multiple datasets demonstrate that DRAGD and DRAGDP significantly outperform existing methods in data reconstruction. Our work highlights a critical privacy vulnerability in federated unlearning and offers a practical solution, advancing the security of federated unlearning systems in real-world applications.
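The reverse-optimization idea in the abstract can be sketched without any deep learning framework. This is a minimal stand-in, not the paper's method: a toy softmax classifier, the simplifying assumption that the observed gradient difference equals the forgotten sample's gradient, and finite differences in place of autodiff for the gradient-matching loss.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy single-layer classifier; all shapes and values are illustrative.
W = rng.normal(size=(4, 8))
b = rng.normal(size=4)
x_del = rng.normal(size=8)   # the "forgotten" sample
y_del = 1                    # its (assumed known) label

def sample_grads(x):
    """Per-sample cross-entropy gradients w.r.t. W and b."""
    p = softmax(W @ x + b)
    p[y_del] -= 1.0
    return np.outer(p, x), p

# Observed before/after gradient difference (simplifying assumption:
# it equals the deleted sample's own gradient).
gW_t, gb_t = sample_grads(x_del)

def match(x):
    """Squared distance between candidate and observed gradients."""
    gW, gb = sample_grads(x)
    return ((gW - gW_t) ** 2).sum() + ((gb - gb_t) ** 2).sum()

# Reverse optimization: descend on the gradient-matching loss from a
# random initialization, using finite differences instead of autodiff.
x_hat = rng.normal(size=8)
loss_start = match(x_hat)
eps, lr = 1e-6, 0.01
for _ in range(3000):
    base = match(x_hat)
    g = np.array([(match(x_hat + eps * np.eye(8)[j]) - base) / eps
                  for j in range(8)])
    x_hat -= lr * g
loss_end = match(x_hat)

print(loss_end < loss_start)
```

DRAGDP's public prior data would enter a loop like this as an extra regularization or initialization term steering `x_hat` toward plausible images; the sketch omits that for brevity.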
Problem

Research questions and friction points this paper is trying to address.

Do gradient exchanges during federated unlearning leak information about deleted data?
Can the gradient difference before and after unlearning be exploited to reconstruct forgotten samples?
How can reconstruction fidelity be improved for complex modalities such as facial images?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-difference attack (DRAGD) that reconstructs forgotten data via analytical analysis and reverse optimization
DRAGDP variant that leverages public prior data for higher-fidelity reconstruction
Outperforms existing reconstruction attacks on CIFAR-10, CelebA, and other benchmarks
Bocheng Ju
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
Junchao Fan
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
Jiaqi Liu
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
Xiaolin Chang
Beijing Jiaotong University