🤖 AI Summary
To address the challenge of personalized modeling in federated learning caused by client data heterogeneity, this paper proposes pFedSeq, a novel framework that, for the first time, models clients' historical adapter updates as sequences. It introduces a server-side sequence learner based on selective state space models (SSMs) to explicitly capture cross-client and cross-round dependencies among adapter updates, thereby enabling effective personalized fine-tuning of the base model. By integrating federated adapter fine-tuning with temporal modeling of historical update sequences, pFedSeq achieves significant improvements over state-of-the-art personalized federated learning methods on four standard benchmark datasets. The experimental results validate both the effectiveness and the generalizability of modeling historical update sequences for enhancing personalization.
📝 Abstract
Personalized federated learning (PFL) studies effective model personalization to address the data heterogeneity issue among clients in traditional federated learning (FL). Existing PFL approaches mainly generate personalized models by relying solely on the clients' latest updated models while ignoring their previous updates, which may result in suboptimal personalized model learning. To bridge this gap, we propose a novel framework, termed pFedSeq, designed for personalizing adapters to fine-tune a foundation model in FL. In pFedSeq, the server maintains and trains a sequential learner, which processes a sequence of past adapter updates from clients and generates calibrations for personalized adapters. To effectively capture the cross-client and cross-step relations hidden in previous updates and to generate high-performing personalized adapters, pFedSeq adopts the powerful selective state space model (SSM) as the architecture of the sequential learner. Through extensive experiments on four public benchmark datasets, we demonstrate the superiority of pFedSeq over state-of-the-art PFL methods.
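The abstract's core mechanism can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the names (`SequentialLearner`, `d_adapter`, `n_rounds`) are invented, the selective SSM is reduced to a single input-gated recurrence rather than a full Mamba-style block, and client adapter updates are random stand-ins. It shows only the data flow the abstract describes: the server stacks each client's past adapter updates into a sequence, runs the sequential learner over it, and adds the resulting calibration to produce a personalized adapter.

```python
import numpy as np

rng = np.random.default_rng(0)

class SequentialLearner:
    """Maps a client's sequence of past adapter updates to a calibration vector.

    A minimal selective-SSM-style recurrence: the step size (and hence the
    state decay and input weighting) depends on the current input, which is
    the "selective" ingredient; this is a simplification of a real SSM block.
    """
    def __init__(self, d_adapter, d_state=16):
        self.A = -np.abs(rng.normal(size=d_state))                     # stable decay rates
        self.W_dt = rng.normal(scale=0.1, size=(d_adapter, d_state))   # step-size gate
        self.W_B = rng.normal(scale=0.1, size=(d_adapter, d_state))    # input projection
        self.W_C = rng.normal(scale=0.1, size=(d_state, d_adapter))    # output projection

    def forward(self, updates):
        # updates: (n_rounds, d_adapter), oldest round first
        h = np.zeros(self.A.shape[0])
        for x in updates:
            dt = np.log1p(np.exp(x @ self.W_dt))                 # softplus: input-dependent step
            h = np.exp(dt * self.A) * h + dt * (x @ self.W_B)    # selective recurrence
        return h @ self.W_C                                      # calibration for this client

# Toy server-side loop: 3 clients, 5 past rounds, 8-dim flattened adapter.
d_adapter, n_rounds, n_clients = 8, 5, 3
learner = SequentialLearner(d_adapter)
global_adapter = rng.normal(size=d_adapter)

personalized = {}
for c in range(n_clients):
    history = rng.normal(scale=0.05, size=(n_rounds, d_adapter))  # stand-in adapter updates
    calibration = learner.forward(history)
    personalized[c] = global_adapter + calibration                # per-client personalized adapter
```

In the actual framework the learner's parameters would themselves be trained on the server (and real adapter updates would replace the random histories); the sketch only makes concrete why a sequence model, rather than just the latest update, can shape each client's calibration.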