FedHFT: Efficient Federated Finetuning with Heterogeneous Edge Clients

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of client data heterogeneity, scarce labeled data, and significant computational resource disparities in edge environments, this paper proposes an efficient framework for personalized federated fine-tuning of large language models (LLMs). Methodologically, it introduces a hybrid mask adapter to enable lightweight parameter isolation and client-specific modeling, integrated with client clustering and a bilevel optimization mechanism to jointly tackle non-IID data distributions and system heterogeneity. During local fine-tuning, only mask parameters are updated—substantially reducing communication and computation overhead while preserving data privacy. Extensive experiments on diverse natural language understanding tasks demonstrate that our approach achieves an average 3.2% accuracy improvement over state-of-the-art heterogeneous federated learning baselines, with 1.8× higher training efficiency. The framework delivers strong performance, adaptability across heterogeneous clients, and scalability to large-scale deployments.

Technology Category

Application Category

📝 Abstract
Fine-tuning pre-trained large language models (LLMs) has become a common practice for personalized natural language understanding (NLU) applications on downstream tasks and domain-specific datasets. However, there are two main challenges: (i) limited and/or heterogeneous data for fine-tuning due to proprietary data confidentiality or privacy requirements, and (ii) varying computation resources available across participating clients such as edge devices. This paper presents FedHFT - an efficient and personalized federated fine-tuning framework to address both challenges. First, we introduce a mixture of masked adapters to handle resource heterogeneity across participating clients, enabling high-performance collaborative fine-tuning of pre-trained language model(s) across multiple clients in a distributed setting, while keeping proprietary data local. Second, we introduce a bi-level optimization approach to handle non-iid data distribution based on masked personalization and client clustering. Extensive experiments demonstrate significant performance and efficiency improvements over various natural language understanding tasks under data and resource heterogeneity compared to representative heterogeneous federated learning methods.
Problem

Research questions and friction points this paper is trying to address.

Handling heterogeneous data distribution across edge clients
Addressing varying computational resources in federated learning
Enabling efficient personalized fine-tuning with data privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of masked adapters handles resource heterogeneity
Bi-level optimization manages non-iid data distribution
Client clustering enables personalized federated fine-tuning
🔎 Similar Papers
No similar papers found.