🤖 AI Summary
In federated recommendation, high client heterogeneity causes server-side global aggregation to deviate from local optima, degrading personalization performance—a phenomenon termed the “aggregation bottleneck.” This work is the first to theoretically characterize this issue and proposes a lightweight, module-free elastic fusion framework that directly integrates the outputs of the global and local models. During collaborative training, the method dynamically balances global knowledge sharing with local personalization preservation, circumventing the complex customization mechanisms (e.g., personalized heads or meta-learning) required by existing personalized federated recommendation approaches. Extensive experiments on multiple real-world datasets demonstrate that the proposed method consistently outperforms state-of-the-art baselines, improving both recommendation accuracy and personalization fidelity.
📝 Abstract
Federated recommendation (FR) facilitates collaborative training by aggregating local models from massive numbers of devices, enabling client-specific personalization while ensuring privacy. However, we empirically and theoretically demonstrate that server-side aggregation can undermine client-side personalization, leading to suboptimal performance, a phenomenon we term the aggregation bottleneck. This issue stems from the inherent heterogeneity across the numerous clients in FR, which drives the globally aggregated model to deviate from local optima. To address this, we propose FedEM, which elastically merges the global and local models to compensate for impaired personalization. Unlike existing personalized federated recommendation (pFR) methods, FedEM (1) investigates the aggregation bottleneck in FR through theoretical insights rather than heuristic analysis, and (2) leverages off-the-shelf local models rather than designing additional mechanisms to boost personalization. Extensive experiments on real-world datasets demonstrate that our method preserves client personalization during collaborative training, outperforming state-of-the-art baselines.
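The abstract does not spell out FedEM's exact merging rule, but the "elastic merging" of global and local models can be illustrated with a minimal sketch: a convex combination of the two models' recommendation scores, where a per-client weight trades off global knowledge sharing against local personalization. The function name `elastic_merge` and the weight `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def elastic_merge(global_scores, local_scores, alpha):
    """Hypothetical sketch of elastic fusion: blend the global and local
    models' recommendation scores with a per-client weight.

    alpha = 1.0 recovers the purely global model (full knowledge sharing);
    alpha = 0.0 recovers the purely local model (full personalization).
    """
    g = np.asarray(global_scores, dtype=float)
    l = np.asarray(local_scores, dtype=float)
    return alpha * g + (1.0 - alpha) * l

# Example: a client whose local optimum deviates from the aggregated model.
global_scores = np.array([0.9, 0.1, 0.5])  # server-side aggregated model
local_scores = np.array([0.2, 0.8, 0.6])   # client's personalized model
merged = elastic_merge(global_scores, local_scores, alpha=0.4)
# → array([0.48, 0.52, 0.56])
```

Because the fusion operates on off-the-shelf model outputs, it requires no personalized heads or meta-learning modules; the weight could, for instance, be tuned per client based on how far the aggregated model drifts from the local optimum.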