Breaking the Aggregation Bottleneck in Federated Recommendation: A Personalized Model Merging Approach

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated recommendation, high client heterogeneity causes the server-side globally aggregated model to deviate from each client's local optimum, degrading personalization performance, a phenomenon the authors term the "aggregation bottleneck." This work is the first to characterize the issue theoretically and proposes a lightweight elastic fusion framework, free of additional modules, that directly integrates the global and local models. During collaborative training, the method dynamically balances global knowledge sharing with the preservation of local personalization, avoiding the complex customization mechanisms (e.g., personalized heads or meta-learning) required by existing personalized federated recommendation approaches. Extensive experiments on multiple real-world datasets show that the proposed method consistently outperforms state-of-the-art baselines, improving both recommendation accuracy and personalization fidelity.

📝 Abstract
Federated recommendation (FR) facilitates collaborative training by aggregating local models from massive devices, enabling client-specific personalization while ensuring privacy. However, we empirically and theoretically demonstrate that server-side aggregation can undermine client-side personalization, leading to suboptimal performance, which we term the aggregation bottleneck. This issue stems from the inherent heterogeneity across numerous clients in FR, which drives the globally aggregated model to deviate from local optima. To this end, we propose FedEM, which elastically merges the global and local models to compensate for impaired personalization. Unlike existing personalized federated recommendation (pFR) methods, FedEM (1) investigates the aggregation bottleneck in FR through theoretical insights, rather than relying on heuristic analysis; (2) leverages off-the-shelf local models rather than designing additional mechanisms to boost personalization. Extensive experiments on real-world datasets demonstrate that our method preserves client personalization during collaborative training, outperforming state-of-the-art baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses aggregation bottleneck in federated recommendation systems
Mitigates performance degradation from server-side model aggregation
Resolves client heterogeneity issues in personalized federated learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Elastically merges global and local models
Leverages off-the-shelf local models directly
Compensates for impaired personalization through model merging
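The abstract describes FedEM as elastically merging the global and local models; the exact merging rule is not given here. A minimal sketch of one natural reading, assuming a per-client convex combination of parameters with an elastic coefficient `alpha` (the function name and coefficient are illustrative, not the paper's notation):

```python
def elastic_merge(global_params, local_params, alpha):
    """Elastically merge two models' parameters (dicts of name -> value).

    alpha = 1.0 keeps the purely local (personalized) model;
    alpha = 0.0 falls back to the globally aggregated model.
    In FedEM, such a coefficient would be tuned per client so that
    aggregation does not wash out local personalization.
    """
    return {
        name: alpha * local_params[name] + (1.0 - alpha) * global_params[name]
        for name in global_params
    }

# Toy example with two scalar "parameters" per model.
global_params = {"w": 0.0, "b": 2.0}
local_params = {"w": 1.0, "b": 0.0}
merged = elastic_merge(global_params, local_params, alpha=0.5)
# merged == {"w": 0.5, "b": 1.0}
```

In practice the same rule would be applied tensor-wise over a neural recommender's state dict; the key point is that it reuses the off-the-shelf local model rather than adding personalization modules.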
Jundong Chen
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, China; Beijing Jiaotong University
Honglei Zhang
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, China; Beijing Jiaotong University
Chunxu Zhang
Jilin University
Fangyuan Luo
Beijing University of Technology
Yidong Li
Beijing Jiaotong University
privacy preserving, data mining, social network analysis, multimedia computing