🤖 AI Summary
This paper identifies a critical limitation in offline contextual bandits: existing off-policy learning (OPL) methods prioritize the statistical accuracy of off-policy evaluation (OPE) estimators while overlooking the optimization difficulty of the resulting objective function under large action spaces, which is a key bottleneck for policy performance. To address this, the authors propose and theoretically justify a paradigm shift: a simple weighted log-likelihood objective, despite sacrificing some estimation accuracy, yields superior optimization properties (e.g., gradient stability and convergence guarantees) compared to complex OPE-based objectives. Through theoretical analysis and systematic experiments in high-dimensional action spaces, they demonstrate that this approach consistently outperforms state-of-the-art OPL algorithms built on high-accuracy OPE estimators. The results establish "optimizability" as a more fundamental design criterion than "estimation accuracy," introducing a new principle for OPL algorithm design.
📝 Abstract
Off-policy evaluation (OPE) and off-policy learning (OPL) are foundational for decision-making in offline contextual bandits. Recent advances in OPL primarily optimize OPE estimators with improved statistical properties, assuming that better estimators inherently yield superior policies. Although this estimator-centric approach is theoretically justified, we argue that it neglects a critical practical obstacle: challenging optimization landscapes. In this paper, we provide theoretical insights and extensive empirical evidence showing that current OPL methods encounter severe optimization issues, particularly as action spaces become large. We demonstrate that simpler weighted log-likelihood objectives enjoy substantially better optimization properties while still recovering competitive, often superior, learned policies. Our findings emphasize the necessity of explicitly addressing optimization considerations in the development of OPL algorithms for large action spaces.
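To make the contrast the abstract draws concrete, below is a minimal numpy sketch comparing a standard IPS-style OPE objective with a reward-weighted log-likelihood objective for a softmax policy. The toy data, variable names, shapes, and the specific choice of importance weights are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logged bandit data (illustrative assumption, not from the paper):
# contexts X, logged actions A, observed rewards R, logging propensities P0.
n, d, K = 512, 8, 100            # samples, context dim, (large) action-space size
X = rng.normal(size=(n, d))
A = rng.integers(0, K, size=n)   # actions chosen by the logging policy
R = rng.uniform(size=n)          # observed rewards
P0 = np.full(n, 1.0 / K)         # uniform logging policy propensities

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ips_objective(theta):
    """IPS-style OPE objective: mean of r * pi_theta(a|x) / p0.

    The policy probability appears directly (not inside a log), so the
    objective can have flat or unstable gradient regions in large action spaces.
    """
    pi = softmax(X @ theta)                       # (n, K) action probabilities
    return np.mean(R * pi[np.arange(n), A] / P0)

def weighted_loglik_objective(theta):
    """Weighted log-likelihood: mean of w * log pi_theta(a|x), here w = r / p0.

    This is the simpler objective family the abstract argues is easier to
    optimize: a weighted cross-entropy with well-behaved gradients.
    """
    pi = softmax(X @ theta)
    logp = np.log(pi[np.arange(n), A] + 1e-12)    # small constant for stability
    return np.mean((R / P0) * logp)

theta = np.zeros((d, K))                          # uniform initial policy
print(ips_objective(theta), weighted_loglik_objective(theta))
```

At the uniform initialization, the IPS objective simply recovers the mean logged reward, while the weighted log-likelihood is a reward-weighted cross-entropy whose gradient is the familiar, well-conditioned softmax-classification gradient; this is one concrete sense in which the simpler objective is "more optimizable."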