🤖 AI Summary
In online reinforcement learning for recommendation systems, the coupling of policy stochasticity and environmental uncertainty leads to high bias in TD value estimation and slow convergence. To address this, we propose a value function decomposition framework that, for the first time, disentangles the TD error into policy-driven and environment-driven components, enabling unbiased value estimation. Building upon a Markovian recommendation process formulation, we design a decomposition-based temporal difference loss and integrate an online simulation training with offline evaluation paradigm. Extensive experiments on multiple public datasets under online simulation settings demonstrate that our approach reduces value estimation error by 23.6%, improves long-term cumulative reward by 17.4%, accelerates convergence by approximately 40%, and significantly enhances exploration robustness and training efficiency.
📝 Abstract
Recent advances in recommender systems have shown that user-system interaction essentially formulates long-term optimization problems, and online reinforcement learning can be adopted to improve recommendation performance. The general solution framework incorporates a value function that estimates the user's expected cumulative rewards in the future and guides the training of the recommendation policy. To avoid local maxima, the policy may explore potential high-quality actions during inference to increase the chance of finding better future rewards. To accommodate the stepwise recommendation process, one widely adopted approach to learning the value function is learning from the difference between the values of two consecutive states of a user. However, we argue that this paradigm involves an incorrect approximation in the stochastic process. Specifically, between the current state and the next state in each training sample, there exist two separate random factors from the stochastic policy and the uncertain user environment. Original temporal difference (TD) learning under these mixed random factors may result in a suboptimal estimation of the long-term rewards. As a solution, we show that these two factors can be separately approximated by decomposing the original temporal difference loss. The disentangled learning framework can achieve a more accurate estimation with faster learning and improved robustness against action exploration. As empirical verification of our proposed method, we conduct offline experiments with online simulated environments built based on public datasets.