A Survey on Parameter-Efficient Fine-Tuning for Foundation Models in Federated Learning

📅 2025-04-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key challenges in deploying large language models (LLMs) within federated learning (FL)—including high computational overhead, statistical heterogeneity, communication bottlenecks, and privacy risks—by systematically investigating the integration of parameter-efficient fine-tuning (PEFT) with FL. We propose the first unified taxonomy of PEFT methods for FL, categorizing them into Additive, Selective, and Reparameterized families, and analyze their intrinsic mechanisms for mitigating data heterogeneity, compressing communication, and enhancing privacy preservation. By unifying representative PEFT techniques (e.g., LoRA, Adapters, Prompt Tuning) with distributed optimization strategies—including federated aggregation, client-side pruning, and gradient compression—we construct a comprehensive PEFT-FL framework evaluated across NLP and CV tasks. Experiments characterize fundamental trade-offs among model performance, system efficiency, and privacy guarantees. Our work provides both theoretical foundations and practical guidelines for lightweight, privacy-aware LLM deployment in FL, and identifies future directions including scalable modeling, convergence analysis, and green (energy-efficient) federated learning.

Technology Category

Application Category

📝 Abstract
Foundation models have revolutionized artificial intelligence by providing robust, versatile architectures pre-trained on large-scale datasets. However, adapting these massive models to specific downstream tasks requires fine-tuning, which can be prohibitively expensive in computational resources. Parameter-Efficient Fine-Tuning (PEFT) methods address this challenge by selectively updating only a small subset of parameters. Meanwhile, Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, making it ideal for privacy-sensitive applications. This survey provides a comprehensive review of the integration of PEFT techniques within federated learning environments. We systematically categorize existing approaches into three main groups: Additive PEFT (which introduces new trainable parameters), Selective PEFT (which fine-tunes only subsets of existing parameters), and Reparameterized PEFT (which transforms model architectures to enable efficient updates). For each category, we analyze how these methods address the unique challenges of federated settings, including data heterogeneity, communication efficiency, computational constraints, and privacy concerns. We further organize the literature based on application domains, covering both natural language processing and computer vision tasks. Finally, we discuss promising research directions, including scaling to larger foundation models, theoretical analysis of federated PEFT methods, and sustainable approaches for resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Adapting large foundation models efficiently to downstream tasks
Integrating parameter-efficient fine-tuning in federated learning settings
Addressing data heterogeneity and privacy in federated PEFT methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-Efficient Fine-Tuning (PEFT) reduces computational costs
Federated Learning (FL) enables privacy-preserving collaborative training
Three PEFT categories: Additive, Selective, Reparameterized methods