Federated Ensemble Learning with Progressive Model Personalization

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work targets the tension in federated learning between personalization and overfitting under client data heterogeneity, proposing a boosting-inspired progressive personalization framework. The approach constructs an ensemble of local models for each client, using low-rank decomposition or width-reduction strategies to grow personalized representation capacity without an excessive increase in model complexity. Presented as the first study to provide non-vacuous generalization guarantees for decoupled personalized federated learning, the method achieves significant gains over existing approaches on standard benchmarks (EMNIST, CIFAR-10/100, and Sent140), with particularly pronounced improvements under highly heterogeneous data distributions.
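
For intuition on how low-rank decomposition caps the complexity added by each personalized layer, the parameter-count arithmetic below is standard linear algebra rather than a result quoted from the paper: a rank-$r$ factorization turns a quadratic dependence on layer width into a linear one.

```latex
% A dense layer W costs d_in * d_out parameters; its rank-r factorization
% W = U V costs r * (d_in + d_out), a saving whenever
% r < d_in * d_out / (d_in + d_out).
W \in \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{in}}}
\;\rightsquigarrow\;
UV, \qquad
U \in \mathbb{R}^{d_{\mathrm{out}} \times r},\;
V \in \mathbb{R}^{r \times d_{\mathrm{in}}},
\qquad
\underbrace{d_{\mathrm{in}}\, d_{\mathrm{out}}}_{\text{dense}}
\;\longrightarrow\;
\underbrace{r\,(d_{\mathrm{in}} + d_{\mathrm{out}})}_{\text{low-rank}}.
```

For example, at $d_{\mathrm{in}} = d_{\mathrm{out}} = 512$ and $r = 16$, this is $262{,}144$ versus $16{,}384$ parameters, which is why the ensemble can grow deeper across boosting iterations without a proportional parameter blow-up.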

📝 Abstract
Federated Learning provides a privacy-preserving paradigm for distributed learning, but suffers from statistical heterogeneity across clients. Personalized Federated Learning (PFL) mitigates this issue by considering client-specific models. A widely adopted approach in PFL decomposes neural networks into a shared feature extractor and client-specific heads. While effective, this design induces a fundamental tradeoff: deep or expressive shared components hinder personalization, whereas large local heads exacerbate overfitting under limited per-client data. Most existing methods rely on rigid, shallow heads and therefore fail to navigate this tradeoff in a principled manner. In this work, we propose a boosting-inspired framework that enables smooth control of this tradeoff. Instead of training a single personalized model, we construct an ensemble of $T$ models for each client. Across boosting iterations, the depth of the personalized component is progressively increased, while its effective complexity is systematically controlled via low-rank factorization or width shrinkage. This design simultaneously limits overfitting and substantially reduces per-client bias by allowing increasingly expressive personalization. We provide a theoretical analysis that establishes generalization bounds with favorable dependence on the average local sample size and the total number of clients. Specifically, we prove that the complexity of the shared layers is effectively suppressed, while the dependence on the boosting horizon $T$ is controlled through parameter reduction. Notably, we provide a novel nonlinear generalization guarantee for decoupled PFL models. Extensive experiments on benchmark and real-world datasets (e.g., EMNIST, CIFAR-10/100, and Sent140) demonstrate that the proposed framework consistently outperforms state-of-the-art PFL methods under heterogeneous data distributions.
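
To make the decoupled design concrete, below is a minimal PyTorch sketch, assuming a shared extractor trained federatively plus a per-client ensemble whose head $t$ has depth $t$. The class names, the depth/rank schedule, and the uniform averaging of head outputs are illustrative assumptions, not the authors' implementation (a boosting scheme would typically fit and weight heads sequentially).

```python
# Sketch of the decoupled architecture from the abstract: a shared feature
# extractor plus a per-client ensemble of T progressively deeper personalized
# heads, with every personalized layer low-rank factorized to cap complexity.
# All names and schedules here are illustrative assumptions.
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Linear layer factorized as W = U @ V with rank r << min(d_in, d_out)."""

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.U = nn.Linear(d_in, rank, bias=False)
        self.V = nn.Linear(rank, d_out, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.V(self.U(x))


def make_head(d_feat: int, n_classes: int, depth: int, rank: int) -> nn.Module:
    """Personalized head of a given depth; every layer is low-rank."""
    layers: list[nn.Module] = []
    for _ in range(depth - 1):
        layers += [LowRankLinear(d_feat, d_feat, rank), nn.ReLU()]
    layers.append(LowRankLinear(d_feat, n_classes, rank))
    return nn.Sequential(*layers)


class ClientEnsemble(nn.Module):
    """Shared extractor + ensemble of T personalized heads.

    Head t has depth t (progressively more expressive) while its rank stays
    fixed, so personalized capacity grows in a controlled way across rounds.
    """

    def __init__(self, shared: nn.Module, d_feat: int, n_classes: int,
                 T: int, rank: int):
        super().__init__()
        self.shared = shared  # weights aggregated across clients by the server
        self.heads = nn.ModuleList(
            make_head(d_feat, n_classes, depth=t, rank=rank)
            for t in range(1, T + 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.shared(x)
        # Uniform additive combination; the paper's boosting weights may differ.
        return torch.stack([h(z) for h in self.heads], dim=0).mean(dim=0)


if __name__ == "__main__":
    shared = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
    model = ClientEnsemble(shared, d_feat=128, n_classes=62, T=3, rank=8)
    logits = model(torch.randn(4, 1, 28, 28))  # e.g. an EMNIST-shaped batch
    print(logits.shape)  # torch.Size([4, 62])
```

Under this sketch's assumptions, a federated round would transmit only `shared` to the server for aggregation, while `heads` stays local to each client.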
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Statistical Heterogeneity
Personalized Federated Learning
Model Personalization
Overfitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Ensemble Learning
Progressive Personalization
Low-rank Factorization
Boosting Framework
Personalized Federated Learning
Ala Emrani
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
Amir Najafi
imec, Belgium
Abolfazl Motahari
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran