🤖 AI Summary
This work reveals a critical privacy vulnerability in parameter-efficient fine-tuning (PEFT) within federated learning (FL): an adversary can reconstruct users' private fine-tuning images with high fidelity solely from the uploaded adapter gradients. To demonstrate this threat, we propose the first systematic gradient inversion attack specifically targeting PEFT adapters, departing from conventional attacks that rely on full-model gradients. Our method combines a maliciously designed pretrained model, analysis of the features carried by adapter gradients, and multi-step optimization-based reconstruction. Evaluated across multiple vision tasks, it reconstructs hundreds of images with PSNR exceeding 28 dB. These results expose an underappreciated data-leakage risk inherent to PEFT in FL settings, challenging the assumption that parameter efficiency implies enhanced privacy, and provide both empirical evidence and actionable insights for designing privacy-preserving PEFT frameworks in federated systems.
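As a toy illustration of why shared gradients can leak inputs (our sketch, not the paper's construction): for a linear layer y = w·x + b under any scalar loss, the chain rule gives ∂L/∂wᵢ = (∂L/∂y)·xᵢ and ∂L/∂b = ∂L/∂y, so whenever ∂L/∂b ≠ 0 an observer of the gradients recovers the private input exactly by elementwise division. A minimal pure-Python sketch:

```python
# Toy illustration (ours, not the paper's attack): for a linear layer
# y = w.x + b with squared-error loss, dL/dw_i = (dL/dy) * x_i and
# dL/db = dL/dy, so the input is x_i = (dL/dw_i) / (dL/db).

def forward(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def gradients(w, b, x, target):
    """Gradients of the squared-error loss (y - target)^2 w.r.t. w and b."""
    y = forward(w, b, x)
    dL_dy = 2.0 * (y - target)              # upstream error
    dL_dw = [dL_dy * xi for xi in x]        # weight gradient = error * input
    dL_db = dL_dy                           # bias gradient = error alone
    return dL_dw, dL_db

def invert(dL_dw, dL_db):
    """Reconstruct the private input from the shared gradients alone."""
    return [gi / dL_db for gi in dL_dw]

# The "server" sees only the gradients, never x_private.
x_private = [0.3, -1.2, 0.7]
w, b, target = [0.5, -0.1, 0.2], 0.1, 1.0
dw, db = gradients(w, b, x_private, target)
x_rec = invert(dw, db)                      # equals x_private exactly
```

The real attack must of course handle nonlinear backbones and batched images, which is where the paper's malicious model design and optimization-based reconstruction come in; this sketch only shows the basic leakage channel.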
📝 Abstract
Federated learning (FL) allows multiple data owners to collaboratively train machine learning models by exchanging local gradients while keeping their private data on-device. To simultaneously enhance privacy and training efficiency, parameter-efficient fine-tuning (PEFT) of large-scale pretrained models has recently gained substantial attention in FL. Keeping a pretrained (backbone) model frozen, each user fine-tunes only a few lightweight modules, used in conjunction with the backbone, to fit specific downstream applications. Accordingly, only the gradients with respect to these lightweight modules are shared with the server. In this work, we investigate how the privacy of users' fine-tuning data can be compromised via a malicious design of the pretrained model and the trainable adapter modules. We demonstrate gradient inversion attacks on a popular PEFT mechanism, the adapter, which allow an attacker to reconstruct local data samples of a target user using only the accessible adapter gradients. Via extensive experiments, we demonstrate that a large batch of fine-tuning images can be retrieved with high fidelity. Our attack highlights the need for privacy-preserving mechanisms for PEFT, while opening up several future directions. Our code is available at https://github.com/info-ucr/PEFTLeak.
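The optimization-based reconstruction step can be sketched in miniature: starting from a dummy input, the attacker adjusts it so that the gradients it would produce match the adapter gradients actually observed (a DLG-style gradient-matching loop). The toy linear "adapter", the assumed-known label, and all names below are our illustrative assumptions, not the paper's implementation:

```python
# Toy DLG-style gradient matching (our sketch, not the paper's method):
# recover a private input x for a linear "adapter" y = w.x + b with
# squared-error loss, given only the observed gradients (g_w, g_b),
# by gradient descent on the gradient mismatch.

def adapter_grads(w, b, x, target):
    """Gradients of (y - target)^2 w.r.t. the adapter parameters."""
    y = sum(wi * xi for wi, xi in zip(w, x)) + b
    d = 2.0 * (y - target)                  # upstream error dL/dy
    return [d * xi for xi in x], d          # (dL/dw, dL/db)

def mismatch(x, w, b, target, g_w, g_b):
    """Squared distance between the dummy input's gradients and the observed ones."""
    dw, db = adapter_grads(w, b, x, target)
    return sum((a - c) ** 2 for a, c in zip(dw, g_w)) + (db - g_b) ** 2

def reconstruct(w, b, target, g_w, g_b, steps=3000, lr=0.02, eps=1e-6):
    """Multi-step reconstruction: finite-difference gradient descent on the mismatch."""
    x = [0.0] * len(w)                      # dummy-input initialization
    for _ in range(steps):
        base = mismatch(x, w, b, target, g_w, g_b)
        grad = []
        for i in range(len(x)):
            x_eps = list(x)
            x_eps[i] += eps
            grad.append((mismatch(x_eps, w, b, target, g_w, g_b) - base) / eps)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# The server observes only the adapter gradients of the private input.
w, b, target = [0.5, -0.1, 0.2], 0.1, 1.0
x_private = [0.3, -1.2, 0.7]
g_w, g_b = adapter_grads(w, b, x_private, target)
x_rec = reconstruct(w, b, target, g_w, g_b)   # converges near x_private
```

In practice such attacks replace the finite-difference loop with automatic differentiation and a stronger optimizer, and must scale to image-sized inputs and batches; the sketch only conveys the gradient-matching principle.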