🤖 AI Summary
While federated unlearning mechanisms support client data deletion, their gradient exchange process may inadvertently leak the very data being deleted. This work first reveals that the difference between global model gradients before and after unlearning can be exploited to reconstruct the original training samples—a novel privacy threat. Method: We propose DRAGD, a gradient-difference-based attack that reconstructs data via analytical gradient-difference analysis and reverse optimization. To improve reconstruction fidelity—especially for complex modalities such as faces—we extend DRAGD into DRAGDP, which incorporates prior-knowledge guidance from public auxiliary data. Contribution/Results: Extensive experiments on CIFAR-10, CelebA, and other benchmarks demonstrate that DRAGDP significantly outperforms existing attacks in both reconstruction quality and the severity of privacy leakage. Our findings systematically expose a critical, previously overlooked security vulnerability in federated unlearning mechanisms.
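The core observation can be sketched on a toy linear model with squared loss, where each sample's gradient is a scaled copy of the input itself. This is a minimal illustration of the gradient-difference idea, not the paper's implementation; all names (`w`, `per_sample_grad`, the sample values) are illustrative:

```python
import numpy as np

# Toy linear model: the per-sample squared-loss gradient w.r.t. weights w is
#   g_i = (w·x_i - y_i) * x_i  -- a scaled copy of the input itself.
def per_sample_grad(w, x, y):
    return (w @ x - y) * x

rng = np.random.default_rng(0)
d, n, k = 3, 6, 2                      # k = index of the sample to be "forgotten"
w = np.array([1.0, 0.0, 1.0])          # fixed global model weights
X = rng.normal(size=(n, d))
Y = rng.normal(size=n)
X[k] = np.array([2.0, 3.0, 1.0])       # the deleted sample we try to reconstruct
Y[k] = 1.0

grads = np.array([per_sample_grad(w, X[i], Y[i]) for i in range(n)])
g_before = grads.mean(axis=0)                    # average gradient before unlearning
g_after = np.delete(grads, k, axis=0).mean(axis=0)  # average gradient after deletion

# Difference analysis: the deleted sample's gradient falls out exactly:
#   n * g_before - (n - 1) * g_after = g_k
g_k = n * g_before - (n - 1) * g_after

# Analytic inversion: g_k = r * x_k with r = w·x_k - y_k, so
#   w·g_k = r^2 + y_k * r  =>  r solves a quadratic. The two roots reflect an
#   inherent sign ambiguity; this toy example takes the positive root.
r = (-Y[k] + np.sqrt(Y[k] ** 2 + 4 * (w @ g_k))) / 2
x_rec = g_k / r

print(np.round(x_rec, 6))  # recovers X[k] = [2. 3. 1.]
```

For a deep network no such closed form exists, which is where the reverse-optimization component of the attack comes in.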
📝 Abstract
Federated learning enables collaborative machine learning while preserving data privacy. However, the rise of federated unlearning, designed to allow clients to erase their data from the global model, introduces new privacy concerns. Specifically, the gradient exchanges during the unlearning process can leak sensitive information about deleted data. In this paper, we introduce DRAGD, a novel attack that exploits gradient discrepancies before and after unlearning to reconstruct forgotten data. We also present DRAGDP, an enhanced version of DRAGD that leverages publicly available prior data to improve reconstruction accuracy, particularly for complex datasets such as facial images. Extensive experiments across multiple datasets demonstrate that DRAGD and DRAGDP significantly outperform existing methods in data reconstruction. Our work highlights a critical privacy vulnerability in federated unlearning and offers a practical solution, advancing the security of federated unlearning systems in real-world applications.
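When the model is not invertible in closed form, the reconstruction step reduces to gradient matching: optimize a candidate input until its gradient reproduces the gradient isolated from the before/after discrepancy. Below is a hedged sketch of that reverse-optimization loop on the same toy linear model; the setup, step size, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

w = np.array([1.0, 0.0, 1.0])            # fixed model weights
x_true, y = np.array([2.0, 3.0, 1.0]), 1.0
g_target = (w @ x_true - y) * x_true     # gradient isolated by the difference analysis

# Reverse optimization: minimize ||grad(x_hat) - g_target||^2 by gradient descent.
# For this model grad(x_hat) = (w·x_hat - y) * x_hat, so the objective's gradient
# w.r.t. x_hat is available analytically: 2 * ((d·x_hat) * w + r * d).
x_hat = g_target.copy()                  # init from the leaked gradient direction
for _ in range(20000):
    r = w @ x_hat - y                    # residual of the candidate input
    d = r * x_hat - g_target             # mismatch between gradients
    x_hat -= 1e-3 * 2 * ((d @ x_hat) * w + r * d)

print(np.round(x_hat, 4))
```

In a real attack the inner gradient would come from autodiff on the global model rather than this closed form, and (as in DRAGDP) a prior over public auxiliary data would regularize the optimization for high-dimensional inputs like face images.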