🤖 AI Summary
This paper studies how to incentivize myopic agents to explore effectively in Lipschitz multi-armed bandits with infinitely many arms in a continuous action space, where the core challenge lies in jointly handling incentive-induced reward drift (biased feedback) and the infinite arm set embedded in a high-dimensional metric space. We propose the first incentive-compatible discretization algorithm driven by the covering dimension, achieving simultaneously sublinear cumulative regret and sublinear total compensation, a combination not previously attained. The method extends to contextual settings, yielding a unified generalization framework. We prove tight upper bounds of $\tilde{O}(T^{(d+1)/(d+2)})$ on both regret and compensation, where $d$ denotes the covering dimension. Numerical experiments validate the efficacy of the incentives and the convergence of the algorithm. Our main contributions are: (1) formalizing the incentive-aware Lipschitz bandit model; (2) designing the first algorithm that jointly ensures incentive compatibility and statistical efficiency; and (3) establishing a tight upper-bound analysis governed by the covering dimension.
📝 Abstract
We study incentivized exploration in multi-armed bandit (MAB) settings with infinitely many arms modeled as elements of continuous metric spaces. Unlike classical bandit models, we consider scenarios where the decision-maker (principal) incentivizes myopic agents, through compensation, to explore beyond their greedy choices, with the added complication of reward drift: biased feedback arising from the incentives. We propose novel incentivized exploration algorithms that uniformly discretize the infinite arm space and show that they simultaneously achieve sublinear cumulative regret and sublinear total compensation. Specifically, we derive regret and compensation bounds of $\tilde{O}(T^{(d+1)/(d+2)})$, where $d$ is the covering dimension of the metric space. Furthermore, we generalize our results to contextual bandits, achieving comparable performance guarantees. We validate our theoretical findings through numerical simulations.
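To make the mechanism concrete, here is a minimal, hypothetical sketch of the approach the abstract describes for a one-dimensional arm space $[0,1]$: uniformly discretize into roughly $T^{1/(d+2)}$ arms (with $d=1$), run a standard UCB rule over the discretized arms, and have the principal pay compensation equal to the empirical gap whenever it recommends an arm other than the agent's greedy choice. The reward function, noise level, and compensation rule below are illustrative assumptions, not the paper's exact algorithm (reward drift is omitted for simplicity).

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_reward(x):
    # Hypothetical 1-Lipschitz mean-reward function on [0, 1]
    return 0.9 - 0.8 * abs(x - 0.7)

T = 5000
d = 1                                   # covering dimension of [0, 1]
K = int(np.ceil(T ** (1 / (d + 2))))    # uniform discretization: ~T^{1/(d+2)} arms
arms = (np.arange(K) + 0.5) / K         # cell centers

counts = np.zeros(K)
sums = np.zeros(K)
compensation = 0.0
regret = 0.0
best = mean_reward(arms).max()

for t in range(T):
    if 0 in counts:
        # Play each discretized arm once before using UCB indices
        i = int(np.argmax(counts == 0))
    else:
        means = sums / counts
        ucb = means + np.sqrt(2 * np.log(T) / counts)
        i = int(np.argmax(ucb))
        # A myopic agent would pull the empirically best arm; if the
        # principal recommends a different arm, it compensates the
        # agent for the empirical gap (an illustrative payment rule).
        greedy = int(np.argmax(means))
        if i != greedy:
            compensation += means[greedy] - means[i]
    r = mean_reward(arms[i]) + 0.1 * rng.standard_normal()
    counts[i] += 1
    sums[i] += r
    regret += best - mean_reward(arms[i])

print(f"average regret per round: {regret / T:.3f}")
print(f"total compensation: {compensation:.2f}")
```

Both the per-round regret and the total compensation shrink relative to $T$ as the horizon grows, matching the sublinear behavior the abstract claims; the discretization granularity $K \sim T^{1/(d+2)}$ is what produces the $\tilde{O}(T^{(d+1)/(d+2)})$ rate.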