Defending Against Gradient Inversion Attacks for Biomedical Images via Learnable Data Perturbation

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Biomedical images in federated learning are highly vulnerable to gradient inversion attacks, and existing defenses generalize poorly and are rarely adapted to the medical domain. Method: This paper proposes a learnable latent-space data perturbation mechanism. It employs a min-max optimization framework that jointly minimizes model utility loss and maximizes reconstruction distortion, incorporating gradient obfuscation and reconstruction constraints. Contribution/Results: To the authors' knowledge, this is the first generalizable defense framework designed specifically for heterogeneous, population-scale medical data. Evaluated on both real-world biomedical and benchmark image datasets, it keeps client-side classification accuracy at about 90% while substantially strengthening privacy: the attacker's classification accuracy on reconstructed images drops by 12.5%, and the MSE between original and reconstructed images increases by over 12.4%.
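The paper does not ship code with this summary, so the following is a minimal sketch of what one joint "minimize utility loss, maximize reconstruction distortion" training step might look like in PyTorch. The encoder/classifier sizes, the single linear perturbation layer, and the trade-off weight `lam` are placeholders we introduce for illustration, not the paper's actual architecture or hyperparameters; the paper's additional gradient-obfuscation and reconstruction constraints are reduced here to a simple latent-distance proxy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative components only; the paper's networks are not specified here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
perturb = nn.Linear(128, 128)   # learnable perturbation acting in latent space
lam = 0.1                       # assumed privacy/utility trade-off weight

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(classifier.parameters())
    + list(perturb.parameters()),
    lr=1e-3,
)

def defense_step(x, y):
    """One joint update: keep the task loss low while pushing the shared
    (perturbed) latent away from the clean latent, a crude stand-in for
    'maximize the attacker's reconstruction distortion'. In practice a
    constraint would cap the distortion term, as the paper's
    reconstruction constraints suggest."""
    z = encoder(x)
    z_tilde = z + perturb(z)                      # perturbed representation
    utility = F.cross_entropy(classifier(z_tilde), y)
    distortion = F.mse_loss(z_tilde, z.detach())  # proxy for reconstruction error
    loss = utility - lam * distortion             # min utility loss, max distortion
    opt.zero_grad()
    loss.backward()
    opt.step()
    return utility.item(), distortion.item()

# Example: one step on random stand-in data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(defense_step(x, y))
```

The key design point this illustrates is that the perturbation is learned jointly with the task model rather than fixed (as with additive noise in differential privacy), which is what lets the defense trade distortion for accuracy adaptively.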

📝 Abstract
The increasing need to share healthcare data and collaborate on clinical research has raised privacy concerns. Health information leakage due to malicious attacks can lead to serious problems such as misdiagnosis and patient identification issues. Privacy-preserving machine learning (PPML) and privacy-enhancing technologies, particularly federated learning (FL), have emerged in recent years as innovative solutions that balance privacy protection with data utility; however, they also suffer from inherent privacy vulnerabilities. Gradient inversion attacks constitute a major threat to data sharing in federated learning. Researchers have proposed many defenses against gradient inversion attacks, but current defense methods for healthcare data lack generalizability: existing solutions may not apply to data from a broader range of populations. In addition, most existing defenses are tested on non-healthcare data, which raises concerns about their applicability to real-world healthcare systems. In this study, we present a defense against gradient inversion attacks in federated learning based on latent data perturbation and minimax optimization, evaluated on both general and medical image datasets. Compared with two baselines, our approach reduces the attacker's accuracy in classifying reconstructed images by 12.5% and increases the Mean Squared Error (MSE) between the original and reconstructed images by over 12.4%, while maintaining model utility at around 90% client classification accuracy. These results suggest the potential of a generalizable defense for healthcare data.
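For readers unfamiliar with the threat model: in a DLG-style gradient inversion attack, an adversary who observes a client's shared gradients iteratively optimizes dummy inputs and labels until their gradients match the observed ones, thereby recovering the private training images. Below is a minimal sketch of that attack loop; it is our illustration, not the paper's code, and `model`, the tensor shapes, and the step count are placeholders.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, target_grads, x_shape, num_classes, steps=100):
    """Optimize dummy data and soft labels until their gradients match the
    gradients shared by a client (the quantity an FL server observes)."""
    x_hat = torch.randn(x_shape, requires_grad=True)
    y_hat = torch.randn(x_shape[0], num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([x_hat, y_hat])

    def closure():
        opt.zero_grad()
        # Loss of the dummy batch under the current global model.
        loss = F.cross_entropy(model(x_hat), y_hat.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between dummy gradients and the client's real gradients.
        diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return x_hat.detach()   # the attacker's reconstruction of private images
```

Defenses such as the one proposed here aim to make the shared gradients uninformative for this matching step while keeping them useful for the global model update.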
Problem

Research questions and friction points this paper is trying to address.

Defending against gradient inversion attacks in federated learning
Addressing privacy vulnerabilities in healthcare data sharing
Improving generalizability of defenses for biomedical images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses learnable data perturbation for defense
Applies minimax optimization for privacy protection (formalized in the objective after this list)
Tests on general and medical image datasets
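One way to write the minimax objective described above; the notation is ours and the paper's exact formulation may differ:

$$
\min_{\theta,\,\phi}\; \mathcal{L}_{\mathrm{util}}\!\big(f_\theta(g_\phi(x)),\,y\big) \;-\; \lambda\,\big\lVert \hat{x}^{*} - x \big\rVert_2^2,
\qquad
\hat{x}^{*} = \arg\min_{\hat{x}}\, \big\lVert \nabla_\theta \ell\big(f_\theta(\hat{x}),\,\hat{y}\big) - \nabla_\theta \ell\big(f_\theta(g_\phi(x)),\,y\big) \big\rVert_2^2
$$

Here $g_\phi$ is the learnable latent perturbation, $f_\theta$ the shared model, $\hat{x}^{*}$ the best reconstruction achievable by a gradient-matching attacker, and $\lambda$ the privacy/utility trade-off weight: the defender minimizes the task loss while maximizing the reconstruction error of the strongest attacker.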
Shiyi Jiang
Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA
F. Firouzi
School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281 USA
Krishnendu Chakrabarty
Fulton Professor of Microelectronics, School of Electrical, Computer and Energy Engineering
Research interests: electronic design automation, testing and design-for-testability, microfluidics, computer engineering, sensor networks