Reinforced Preference Optimization for Recommendation

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative recommender systems face two key bottlenecks: difficulty in modeling high-quality negative samples and overreliance on implicit rewards. To address these, this paper proposes the first reinforcement learning (RL)-based optimization framework tailored for generative recommendation. The method employs constrained beam search to generate diverse, hard negative samples; designs a fine-grained, verifiable reward function that combines rule-based accuracy with auxiliary ranking signals; and applies policy optimization for precise preference modeling. The work constitutes the first systematic exploration of the RL design space for generative recommendation and supports heterogeneous large language model (LLM) backbones and scales. Extensive experiments on three real-world datasets demonstrate significant improvements over conventional recommenders and state-of-the-art LLM-based baselines, with strong generalization and consistent performance gains.
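The constrained beam search described above can be sketched as prefix-constrained decoding over a catalog of valid items: at each step, candidate tokens are restricted to prefixes of real item identifiers, so every completed beam is a valid (non-hallucinated) item. The sketch below is illustrative only; the paper's actual implementation is not given here, and `score_fn` stands in for the LLM's per-token log-probabilities.

```python
def build_trie(items):
    """Map each token-sequence prefix to the set of allowed next tokens."""
    trie = {}
    for tokens in items:
        for i in range(len(tokens)):
            trie.setdefault(tuple(tokens[:i]), set()).add(tokens[i])
    return trie

def constrained_beam_search(score_fn, items, beam_size=3):
    """Return completed beams as (token_tuple, log_score), best first.

    score_fn(prefix, token) -> log-probability of `token` given `prefix`
    items                   -> iterable of valid item token sequences
    """
    trie = build_trie(items)
    complete = {tuple(t) for t in items}   # valid finished items
    beams = [((), 0.0)]
    finished = []
    while beams:
        candidates = []
        for prefix, score in beams:
            if prefix in complete:
                finished.append((prefix, score))
                continue
            # Only tokens that extend some valid item are allowed.
            for tok in trie.get(prefix, ()):
                candidates.append((prefix + (tok,), score + score_fn(prefix, tok)))
        beams = sorted(candidates, key=lambda b: -b[1])[:beam_size]
    return sorted(finished, key=lambda b: -b[1])
```

Because decoding never leaves the item trie, no beam is wasted on invalid or repeated strings, which is the sampling-efficiency problem the summary attributes to unconstrained generation.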

📝 Abstract
Recent breakthroughs in large language models (LLMs) have fundamentally shifted recommender systems from discriminative to generative paradigms, where user behavior modeling is achieved by generating target items conditioned on historical interactions. Yet current generative recommenders still suffer from two core limitations: the lack of high-quality negative modeling and the reliance on implicit rewards. Reinforcement learning with verifiable rewards (RLVR) offers a natural solution by enabling on-policy sampling of harder negatives and grounding optimization in explicit reward signals. However, applying RLVR to generative recommenders remains non-trivial. Its unique generation space often leads to invalid or repetitive items that undermine sampling efficiency, and ranking supervision is sparse since most items receive identical zero rewards. To address these challenges, we propose Reinforced Preference Optimization for Recommendation (ReRe), a reinforcement-based paradigm tailored to LLM-based recommenders, an important direction in generative recommendation. ReRe incorporates constrained beam search to improve sampling efficiency and diversify hard negatives, while augmenting rule-based accuracy rewards with auxiliary ranking rewards for finer-grained supervision. Extensive experiments on three real-world datasets demonstrate that ReRe consistently outperforms both traditional and LLM-based recommenders in ranking performance. Further analysis shows that ReRe not only enhances performance across both base and SFT-initialized models but also generalizes robustly across different backbone families and scales. Beyond empirical gains, we systematically investigate the design space of RLVR in recommendation across generation, sampling strategy, reward modeling, and optimization algorithm, offering insights for future research.
Problem

Research questions and friction points this paper is trying to address.

Generative recommenders lack high-quality negative sampling
Implicit reward reliance limits recommendation optimization
Sparse ranking supervision undermines reinforcement learning effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained beam search for efficient sampling
Auxiliary ranking rewards for fine-grained supervision
Reinforcement-based paradigm tailored to LLM recommenders
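The second innovation, augmenting a rule-based accuracy reward with an auxiliary ranking reward, can be sketched as follows. The exact formulation is not given in this listing, so the version below is a hypothetical stand-in that uses a reciprocal-rank bonus (and an assumed `rank_weight` parameter) to give partial credit whenever the target item appears anywhere in the sampled beam, rather than only when it is ranked first.

```python
def recommendation_reward(sampled_items, target_item, rank_weight=0.5):
    """Scalar reward for a beam of generated item candidates.

    sampled_items : item ids in ranked (beam-score) order
    target_item   : ground-truth next item
    rank_weight   : weight of the auxiliary ranking signal (assumed)
    """
    # Rule-based accuracy reward: 1 only if the top-ranked item is correct.
    accuracy = 1.0 if sampled_items and sampled_items[0] == target_item else 0.0

    # Auxiliary ranking reward: reciprocal rank of the target in the beam.
    # This densifies supervision: beams that generate the target at rank 2
    # or 3 no longer receive the same zero reward as beams that miss it.
    if target_item in sampled_items:
        ranking = 1.0 / (sampled_items.index(target_item) + 1)
    else:
        ranking = 0.0

    return accuracy + rank_weight * ranking
```

This directly targets the sparse-supervision problem noted in the abstract, where most sampled items would otherwise receive identical zero rewards.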
Junfei Tan
Taobao & Tmall Group of Alibaba, China
Yuxin Chen
National University of Singapore
An Zhang
University of Science and Technology
Generative Models · Trustworthy AI · Agentic AI · Recommender System
Junguang Jiang
Taobao & Tmall Group of Alibaba, China
Bin Liu
Taobao & Tmall Group of Alibaba, China
Ziru Xu
Alibaba Group
Han Zhu
Taobao & Tmall Group of Alibaba, China
Jian Xu
Taobao & Tmall Group of Alibaba, China
Bo Zheng
Taobao & Tmall Group of Alibaba, China
Xiang Wang
National University of Singapore