🤖 AI Summary
To address the dual challenges of high communication overhead and severe data heterogeneity in federated fine-tuning of large language models (LLMs) over distributed private datasets, this paper proposes FedTT—a tensorized adapter architecture—and its enhanced variant, FedTT+. FedTT tensorizes parameter-efficient fine-tuning (PEFT) adapters and integrates them into a federated learning framework, enabling cross-device and cross-institutional deployment. FedTT+ further introduces an adaptive tensor factor freezing mechanism to significantly improve robustness under non-IID data distributions. The approach synergistically combines tensor decomposition, adapter injection (into encoder/decoder blocks), and federated optimization. Extensive evaluation on BERT and LLaMA demonstrates that FedTT and FedTT+ match or surpass state-of-the-art federated PEFT methods in accuracy while reducing communication costs by up to 10×. To our knowledge, this is the first work achieving highly efficient, robust, and low-overhead parameter-efficient federated fine-tuning of LLMs.
📝 Abstract
Parameter-efficient fine-tuning (PEFT) methods typically assume that Large Language Models (LLMs) are trained on data from a single device or client. However, real-world scenarios often require fine-tuning these models on private data distributed across multiple devices. Federated Learning (FL) offers an appealing solution by preserving user privacy, as sensitive data remains on local devices during training. Nonetheless, integrating PEFT methods into FL introduces two main challenges: communication overhead and data heterogeneity. In this paper, we introduce FedTT and FedTT+, methods for adapting LLMs by integrating tensorized adapters into client-side models' encoder/decoder blocks. FedTT is versatile and can be applied to both cross-silo FL and large-scale cross-device FL. FedTT+, an extension of FedTT tailored for cross-silo FL, enhances robustness against data heterogeneity by adaptively freezing portions of tensor factors, further reducing the number of trainable parameters. Experiments on BERT and LLaMA models demonstrate that our proposed methods successfully address data heterogeneity challenges and perform on par with or even better than existing federated PEFT approaches while achieving up to 10$\times$ reduction in communication cost.
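The abstract does not spell out the tensor format used by the adapters, but the core idea of a tensorized adapter can be illustrated with a tensor-train (TT) factorization: a weight update is stored as a chain of small cores rather than a dense matrix, so clients only train and transmit the cores. The sketch below is a hypothetical NumPy illustration under that assumption (the shapes, TT-ranks, and function names are not from the paper).

```python
import numpy as np

# Hypothetical sketch: a 64x64 weight update dW is viewed as a 4-D tensor
# (8, 8, 8, 8) and stored as tensor-train (TT) cores. In an FL setting,
# only these small cores would be trained and communicated, not dW itself.

rng = np.random.default_rng(0)

shape = (8, 8, 8, 8)          # reshaped view of the 64x64 weight update
ranks = (1, 4, 4, 4, 1)       # TT-ranks; boundary ranks are 1 by definition

# One TT core per tensor mode: cores[k] has shape (r_k, n_k, r_{k+1}).
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1])) * 0.1
         for k in range(4)]

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]                              # (1, n0, r1)
    for core in cores[1:]:
        # merge the trailing rank index with the next core's leading rank
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))            # drop boundary ranks of size 1

dW = tt_reconstruct(cores).reshape(64, 64)       # dense update applied to W

tt_params = sum(c.size for c in cores)           # 320 trainable parameters
dense_params = 64 * 64                           # 4096 for the dense update
print(tt_params, dense_params)
```

With these (illustrative) ranks, the TT cores hold 320 parameters versus 4096 for the dense update, which is the kind of per-round communication saving a tensorized adapter targets; FedTT+ would additionally freeze some of the cores so even fewer parameters are exchanged.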