🤖 AI Summary
Traditional generative recommender systems suffer from impoverished and unreliable inference because they rely on oversimplified, single-view item semantic representations. To address this, we propose REG4Rec, a reasoning-enhanced generative framework for sequential recommendation that constructs multiple dynamic semantic reasoning paths. Our approach features: (1) multi-semantic inference paths enabled by a Mixture-of-Experts (MoE)-based parallel quantization codebook for fine-grained semantic modeling; (2) a consistency-guided self-reflective pruning mechanism to enhance inference reliability; and (3) a preference-aligned, multi-step reward-augmented training strategy that jointly optimizes recommendation diversity and accuracy. Extensive experiments on multiple real-world benchmark datasets and large-scale online A/B tests demonstrate consistent and significant improvements over state-of-the-art methods, along with strong generalization and robust industrial deployability in practical recommendation scenarios.
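To make the multi-token item representation concrete, below is a minimal sketch of an MoE-style parallel quantization codebook: a gating network routes each item embedding to a few parallel codebooks, and each selected codebook contributes one nearest-code token, yielding an unordered set of semantic tokens per item. The class name `MPQTokenizer` and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an MoE-based parallel quantization codebook (MPQ).
# All names and sizes here are assumptions for illustration only.
import torch
import torch.nn as nn


class MPQTokenizer(nn.Module):
    """Quantizes an item embedding against several parallel codebooks,
    with an MoE-style gate choosing which codebooks (experts) fire,
    producing an unordered set of semantic token ids per item."""

    def __init__(self, dim: int, num_codebooks: int = 4,
                 codes_per_book: int = 256, top_k: int = 2):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(num_codebooks, codes_per_book, dim))
        self.gate = nn.Linear(dim, num_codebooks)  # expert router
        self.top_k = top_k

    def forward(self, item_emb: torch.Tensor) -> torch.Tensor:
        # item_emb: (batch, dim) -> semantic token ids: (batch, top_k)
        scores = self.gate(item_emb)                      # (batch, num_codebooks)
        _, expert_idx = scores.topk(self.top_k, dim=-1)   # active codebooks per item
        tokens = []
        for k in range(self.top_k):
            books = self.codebooks[expert_idx[:, k]]      # (batch, codes_per_book, dim)
            dists = torch.cdist(item_emb.unsqueeze(1), books).squeeze(1)
            tokens.append(dists.argmin(dim=-1))           # nearest code id in this book
        return torch.stack(tokens, dim=-1)                # unordered token set


if __name__ == "__main__":
    tokenizer = MPQTokenizer(dim=64)
    ids = tokenizer(torch.randn(8, 64))
    print(ids.shape)  # torch.Size([8, 2])
```

Because the tokens come from independent codebooks rather than a single hierarchical quantizer, each item maps to several interchangeable semantic entry points, which is what enlarges the space of possible reasoning paths.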
📝 Abstract
Sequential recommendation aims to predict a user's next action in large-scale recommender systems. While traditional methods often suffer from insufficient information interaction, recent generative recommendation models partially address this issue by directly generating item predictions. To better capture user intents, recent studies have introduced a reasoning process into generative recommendation, significantly improving recommendation performance. However, these approaches represent each item with a single semantic view, which limits the diversity of reasoning pathways and the reliability of the reasoning process. To tackle these issues, we introduce REG4Rec, a reasoning-enhanced generative model that constructs multiple dynamic semantic reasoning paths alongside a self-reflection process to ensure high-confidence recommendations. Specifically, REG4Rec utilizes an MoE-based parallel quantization codebook (MPQ) to generate multiple unordered semantic tokens for each item, thereby constructing a larger and more diverse reasoning space. Furthermore, to enhance the reliability of reasoning, we propose a reasoning-enhancement training stage comprising Preference Alignment for Reasoning (PARS) and a Multi-Step Reward Augmentation (MSRA) strategy. PARS uses reward functions tailored for recommendation to strengthen reasoning and reflection, while MSRA introduces future multi-step actions to improve overall generalization. During inference, Consistency-Oriented Self-Reflection for Pruning (CORP) discards inconsistent reasoning paths, preventing the propagation of erroneous reasoning. Lastly, we develop an efficient offline training strategy for large-scale recommendation. Experiments on real-world datasets and online evaluations show that REG4Rec delivers outstanding performance and substantial practical value.
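The inference-time pruning idea can be illustrated with a small consistency check: sample several reasoning paths, count how often each predicted item occurs, and drop paths whose prediction disagrees with enough of the others. The function name `corp_prune`, the path/score structure, and the agreement threshold are assumptions for illustration; the paper's actual CORP criterion may differ.

```python
# Hypothetical sketch of consistency-oriented pruning at inference time.
from collections import Counter
from typing import List, Tuple

def corp_prune(paths: List[Tuple[List[int], int]],
               min_agreement: float = 0.5) -> List[Tuple[List[int], int]]:
    """Keep only reasoning paths (token sequence, predicted item) whose
    prediction agrees with a sufficient fraction of the sampled paths,
    so erroneous reasoning does not propagate to the final recommendation."""
    votes = Counter(item for _, item in paths)   # how often each item is predicted
    total = len(paths)
    return [(tokens, item) for tokens, item in paths
            if votes[item] / total >= min_agreement]

# Example: three sampled reasoning paths, two of which agree on item 42.
sampled = [([3, 17, 5], 42), ([3, 9, 11], 42), ([8, 2, 6], 7)]
print(corp_prune(sampled))  # the path predicting item 7 is pruned
```

In this toy run the minority path is removed before the final prediction is formed, mirroring the abstract's description of discarding inconsistent reasoning paths rather than averaging over them.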