Proxy Model-Guided Reinforcement Learning for Client Selection in Federated Recommendation

📅 2025-08-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In federated recommendation systems, existing client selection strategies overlook statistical heterogeneity among users and fail to address recommendation-specific challenges—including high-dimensional long-tail item distributions, sparse model updates, and prohibitive overhead in contribution evaluation. To tackle these issues, we propose SA-Agent, a surrogate-model-guided reinforcement learning framework. It introduces a dual-branch ProxyNCF architecture for lightweight, training-free client contribution estimation; and designs a staleness-aware state space and reward function, pioneering the integration of surrogate modeling with staleness-aware RL for client selection. Extensive experiments on multiple public benchmarks demonstrate that SA-Agent significantly improves recommendation accuracy (average +12.3% Recall@10), enhances coverage of long-tail item embedding updates, accelerates model convergence, and reduces communication overhead.

Technology Category

Application Category

📝 Abstract
Federated recommender systems have emerged as a promising privacy-preserving paradigm, enabling personalized recommendation services without exposing users' raw data. By keeping data local and relying on a central server to coordinate training across distributed clients, FedRSs protect user privacy while collaboratively learning global models. However, most existing FedRS frameworks adopt fully random client selection strategy in each training round, overlooking the statistical heterogeneity of user data arising from diverse preferences and behavior patterns, thereby resulting in suboptimal model performance. While some client selection strategies have been proposed in the broader federated learning literature, these methods are typically designed for generic tasks and fail to address the unique challenges of recommendation scenarios, such as expensive contribution evaluation due to the large number of clients, and sparse updates resulting from long-tail item distributions. To bridge this gap, we propose ProxyRL-FRS, a proxy model-guided reinforcement learning framework tailored for client selection in federated recommendation. Specifically, we first introduce ProxyNCF, a dual-branch model deployed on each client, which augments standard Neural Collaborative Filtering with an additional proxy model branch that provides lightweight contribution estimation, thus eliminating the need for expensive per-round local training traditionally required to evaluate a client's contribution. Furthermore, we design a staleness-aware SA reinforcement learning agent that selects clients based on the proxy-estimated contribution, and is guided by a reward function balancing recommendation accuracy and embedding staleness, thereby enriching the update coverage of item embeddings. Experiments conducted on public recommendation datasets demonstrate the effectiveness of ProxyRL-FRS.
Problem

Research questions and friction points this paper is trying to address.

Improves client selection in federated recommendation systems
Addresses statistical heterogeneity in user data preferences
Reduces expensive contribution evaluation costs in FedRS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proxy model for lightweight client contribution estimation
Reinforcement learning for optimized client selection
Staleness-aware reward balancing accuracy and coverage