🤖 AI Summary
To address data sparsity and the challenge of serving clients that join after training, this paper proposes PeFLL, a meta-learning-based personalized federated learning framework. PeFLL jointly trains a client embedding network and a hypernetwork: the embedding network maps a client's data to a latent descriptor that reflects inter-client similarity, and the hypernetwork maps that descriptor to the parameters of a fully personalized model, so unseen clients receive ready-to-use models without any local fine-tuning. Its contributions are threefold: (i) more accurate personalized models, especially in the low-data regime and for clients that emerge after training; (ii) reduced on-client computation and client-server communication, since new clients need no additional optimization; and (iii) theoretical guarantees establishing generalization from observed clients to future ones. Experiments on several personalized federated learning benchmarks demonstrate state-of-the-art performance.
📝 Abstract
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones. At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork. The embedding network is used to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork takes as input such descriptors and outputs the parameters of fully personalized client models. In combination, both networks constitute a learning algorithm that achieves state-of-the-art performance in several personalized federated learning benchmarks.
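The embedding-network/hypernetwork pipeline described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for exposition: the layer sizes, the single-linear-layer architectures, and the linear classifier as the personalized model are placeholders, not the networks actually used in PeFLL.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): input features,
# descriptor size, and number of output classes.
D_IN, D_DESC, D_OUT = 8, 4, 3

# Embedding network (hypothetical one-layer stand-in): maps each local
# example to a latent vector, then mean-pools into one client descriptor.
W_emb = rng.normal(scale=0.1, size=(D_IN, D_DESC))

def embed(client_data):
    """Compute a client descriptor from a batch of the client's data."""
    return np.tanh(client_data @ W_emb).mean(axis=0)

# Hypernetwork (also a one-layer stand-in): maps a descriptor to the
# flattened parameters of a personalized linear classifier.
n_params = D_IN * D_OUT + D_OUT
W_hyp = rng.normal(scale=0.1, size=(D_DESC, n_params))

def generate_model(descriptor):
    """Unpack the hypernetwork output into weights and bias."""
    theta = descriptor @ W_hyp
    W = theta[: D_IN * D_OUT].reshape(D_IN, D_OUT)
    b = theta[D_IN * D_OUT :]
    return W, b

# A new client computes its descriptor from local data and receives a
# ready-to-use personalized model -- no local fine-tuning step.
client_data = rng.normal(size=(16, D_IN))   # 16 local examples
W, b = generate_model(embed(client_data))
logits = client_data @ W + b
print(logits.shape)   # (16, 3)
```

In training, both networks would be optimized jointly across the observed clients so that descriptors capture inter-client similarity and the generated parameters perform well on each client's own data; the sketch only shows the forward pass that a new client would trigger at deployment time.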