🤖 AI Summary
Traditional ad ranking utility functions suffer from ambiguous optimization objectives, highly coupled parameters, and insufficient personalization and seasonality adaptation due to manual tuning. This paper proposes a deep reinforcement learning-based framework for personalized, dynamic utility parameter optimization, formulating parameter tuning as a policy learning problem. It learns the optimal policy end-to-end directly from online serving logs, circumventing high-variance value estimation while enabling real-time adaptation and user-level personalization. Our approach innovatively integrates multi-objective reward design with policy gradient optimization. Large-scale A/B testing demonstrates significant improvements over manually tuned baselines: +9.7% in click-through rate (CTR) and +7.7% in long-click rate (LCR), substantially enhancing the tripartite value balance among the platform, advertisers, and users.
Abstract
The ranking utility function in an ad recommender system, which linearly combines predictions of various business goals, plays a central role in balancing values across the platform, advertisers, and users. Traditional manual tuning, while offering simplicity and interpretability, often yields suboptimal results due to its unprincipled tuning objectives, the vast number of parameter combinations, and its lack of personalization and adaptability to seasonality. In this work, we propose a general Deep Reinforcement Learning framework for Personalized Utility Tuning (DRL-PUT) to address the challenges of multi-objective optimization within ad recommender systems. Our key contributions include: 1) Formulating the problem as a reinforcement learning task: given the state of an ad request, we predict the optimal hyperparameters to maximize a pre-defined reward. 2) Developing an approach to directly learn an optimal policy model from online serving logs, avoiding the need to estimate a value function, which is inherently challenging due to the high variance and unbalanced distribution of immediate rewards. We evaluated DRL-PUT through an online A/B experiment in Pinterest's ad recommender system. Compared to the baseline manual utility tuning approach, DRL-PUT improved the click-through rate by 9.7% and the long click-through rate by 7.7% on the treated segment. We conducted a detailed ablation study on the impact of different reward definitions and analyzed the personalization aspect of the learned policy model.
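To make the formulation concrete, the following is a minimal, illustrative sketch of the setup the abstract describes: a policy maps the state of an ad request to the hyperparameters (weights) of a linear ranking utility, and is updated with a REINFORCE-style policy gradient computed from logged rewards. All names, dimensions, and the feature layout here are assumptions for illustration, not the paper's actual implementation or Pinterest's production code.

```python
import numpy as np

# Assumed dimensions, for illustration only.
STATE_DIM = 8   # features describing the ad request
N_OBJ = 3       # predicted objectives, e.g. pCTR, pLongClick, bid
SIGMA = 1.0     # fixed exploration std-dev of the Gaussian policy

W = np.zeros((N_OBJ, STATE_DIM))  # linear policy parameters


def utility(weights, predictions):
    """Ranking utility: linear combination of per-objective predictions."""
    return float(np.dot(weights, predictions))


def policy_mean(state):
    """Mean of the Gaussian over utility weights for this request state."""
    return W @ state


def reinforce_update(state, action, advantage, lr=0.01):
    """One policy-gradient step: lr * advantage * grad log N(action; mean, SIGMA^2)."""
    global W
    mean = W @ state
    grad_mean = (action - mean) / SIGMA**2  # d log-prob / d mean of a Gaussian
    W = W + lr * advantage * np.outer(grad_mean, state)


rng = np.random.default_rng(0)
state = rng.normal(size=STATE_DIM)

# Serving: sample utility weights for this request, rank candidate ads by utility.
weights = policy_mean(state) + SIGMA * rng.normal(size=N_OBJ)
candidates = rng.random((5, N_OBJ))  # per-ad objective predictions
ranked = sorted(range(5), key=lambda i: -utility(weights, candidates[i]))

# Learning from logs: an action whose logged reward exceeded the baseline
# (positive advantage) pulls the policy mean toward it.
before = policy_mean(state)
reinforce_update(state, weights, advantage=1.0)
after = policy_mean(state)
```

Note that this directly parameterizes and updates the policy, mirroring the paper's key design choice of learning from serving logs without estimating a value function; in practice the advantage would come from the observed reward minus a baseline rather than the constant used above.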