🤖 AI Summary
Online platforms often discretize continuous incentives for A/B testing, which hinders extrapolation to untested intervention levels and overlooks user heterogeneity, leading to suboptimal decisions. This work proposes a Deep Learning for Policy Targeting (DLPT) framework that establishes, for the first time, a theoretical foundation for learning personalized continuous intervention policies from discrete randomized controlled trials. We prove the asymptotic unbiasedness and consistency of the policy value estimator and derive a root-n regret bound. By integrating high-dimensional user features, DLPT enables end-to-end optimization of personalized continuous policies. In a real-world incentive experiment conducted in collaboration with a leading social media platform, DLPT substantially outperforms existing benchmarks, achieving significant improvements in both policy value estimation and identification of personalized optimal interventions.
📝 Abstract
Randomized Controlled Trials (RCTs), or A/B testing, have become the gold standard for optimizing various operational policies on online platforms. However, RCTs on these platforms typically cover a limited number of discrete treatment levels, while the platforms increasingly face complex operational challenges involving continuous decision variables, such as pricing and incentive programs. The current industry practice discretizes these continuous variables into several treatment levels and selects the best-performing level. This approach, however, often leads to suboptimal decisions: it cannot accurately extrapolate performance to untested treatment levels, and it fails to account for heterogeneity in treatment effects across user characteristics. This study addresses these limitations by developing a theoretically grounded and empirically validated framework for learning personalized continuous policies based on high-dimensional user characteristics, using observations from an RCT with only a discrete set of treatment levels. Specifically, we introduce a Deep Learning for Policy Targeting (DLPT) framework that includes both personalized policy value estimation and personalized policy learning. We prove that our policy value estimators are asymptotically unbiased and consistent, and that the learned policy achieves a root-n regret bound. We empirically validate our methods in collaboration with a leading social media platform to optimize incentive levels for content creation. Results demonstrate that our DLPT framework significantly outperforms existing benchmarks, achieving substantial improvements in both evaluating policy value for each user group and identifying the optimal personalized policy.
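To make the core idea concrete, the sketch below illustrates the general recipe the abstract describes: fit an outcome model on data from a few discrete randomized arms, then extrapolate it to choose a personalized continuous treatment level per user. This is *not* the paper's DLPT method; it replaces the deep outcome and policy networks with a simple quadratic-in-dose least-squares model, and all variable names, the simulated response function, and the tested incentive levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated RCT: each user has a 1-D feature x and is randomized
# to one of a few discrete incentive levels (hypothetical setup).
n = 5000
x = rng.uniform(-1, 1, n)
levels = np.array([0.0, 0.5, 1.0])      # discrete arms tested in the RCT
t = rng.choice(levels, n)

# Hypothetical true response: concave in the incentive, with an
# optimum that shifts with the user feature x.
y = (1 + x) * t - 1.5 * t**2 + rng.normal(0.0, 0.1, n)

# Outcome model y(x, t) = b0 + b1*x + b2*t + b3*x*t + b4*t^2,
# fit by least squares on the discrete-arm data (a stand-in for
# the paper's deep outcome model). Three distinct levels suffice
# to identify the quadratic dose terms.
X = np.column_stack([np.ones(n), x, t, x * t, t**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def policy(x_user):
    """Personalized continuous policy: maximize the fitted model in t.

    dy/dt = b2 + b3*x + 2*b4*t = 0  ->  t*(x) = -(b2 + b3*x) / (2*b4),
    clipped to the range of tested levels.
    """
    b0, b1, b2, b3, b4 = beta
    t_star = -(b2 + b3 * x_user) / (2 * b4)
    return float(np.clip(t_star, levels.min(), levels.max()))
```

Under the simulated response, the true optimum is t*(x) = (1 + x) / 3, so the fitted policy recommends roughly 0.5 for x = 0.5 and roughly 0.17 for x = -0.5, i.e., doses that were never tested as their own arms. The paper's framework generalizes this picture to high-dimensional features, neural outcome and policy models, and estimators with formal unbiasedness, consistency, and regret guarantees.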