🤖 AI Summary
In e-commerce recommendation, users’ limited time budgets create a fundamental trade-off between assessment cost and item relevance, hindering simultaneous optimization of both. Method: This paper proposes a budget-aware slate recommendation framework that formulates recommendation as a time-constrained Markov Decision Process (MDP). We introduce the first unified budget-aware utility function that jointly optimizes relevance and user assessment cost. To enable learning, we construct a simulation environment built on re-ranking data and systematically compare on-policy (PPO) and off-policy (DQN) reinforcement learning (RL) strategies. Contribution/Results: Experiments on Alibaba’s real-world re-ranking dataset demonstrate that our RL policies significantly outperform contextual bandit baselines under stringent time constraints, achieving a 12.7% lift in click-through rate under extreme latency limits. This validates the effectiveness and practicality of latency-sensitive recommendation modeling.
📝 Abstract
Unlike traditional recommendation tasks, finite user time budgets introduce a critical resource constraint, requiring the recommender system to balance item relevance against evaluation cost. For example, in a mobile shopping interface, users interact with recommendations by scrolling, where each scroll triggers a list of items called a slate. Users incur an evaluation cost: the time spent assessing an item's features before deciding to click. Highly relevant items with high evaluation costs may not fit within the user's time budget, reducing engagement. In this position paper, our objective is to evaluate reinforcement learning algorithms that learn patterns in user preferences and time budgets simultaneously, crafting recommendations with higher engagement potential under resource constraints. Our experiments explore the use of reinforcement learning to recommend items to users with Alibaba's Personalized Re-ranking dataset, which supports slate optimization in e-commerce contexts. Our contributions include (i) a unified formulation of time-constrained slate recommendation modeled as Markov Decision Processes (MDPs) with budget-aware utilities; (ii) a simulation framework for studying policy behavior on re-ranking data; and (iii) empirical evidence that on-policy and off-policy control can outperform traditional contextual-bandit methods under tight time budgets.
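To make the relevance-versus-evaluation-cost trade-off concrete, the following is a minimal sketch of a budget-aware slate utility. The paper does not spell out its utility function here, so the additive form below (relevance accumulated only over items the user can assess before the budget runs out) and the per-cost greedy baseline are illustrative assumptions, not the paper's method; the names `Item`, `slate_utility`, and `greedy_slate` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Item:
    relevance: float   # predicted relevance score
    eval_cost: float   # assumed seconds the user spends assessing the item

def slate_utility(slate, budget):
    """Sum relevance of items the user can assess within `budget`.

    Assumed form: the user scans the slate top-down and stops once the
    cumulative evaluation cost exceeds the time budget; items below the
    cutoff contribute nothing.
    """
    spent, utility = 0.0, 0.0
    for item in slate:
        spent += item.eval_cost
        if spent > budget:
            break  # user runs out of time; remaining items are never seen
        utility += item.relevance
    return utility

def greedy_slate(candidates, budget, k):
    """Naive baseline: rank by relevance per unit of evaluation cost."""
    ranked = sorted(candidates,
                    key=lambda i: i.relevance / i.eval_cost,
                    reverse=True)
    return ranked[:k]
```

Under such a utility, a cheap, moderately relevant item can beat a highly relevant but costly one when the budget is tight, which is exactly the regime where the paper's RL policies are claimed to outperform contextual bandits.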