🤖 AI Summary
This paper studies the multi-armed bandit problem with high-dimensional contextual covariates, aiming to efficiently estimate and select the arm with the optimal mean reward function. We propose an online ε-greedy policy based on weighted kernel ridge regression, which nonparametrically models the unknown mean reward functions in a reproducing kernel Hilbert space (RKHS). To our knowledge, this is the first work integrating kernel methods with an online ε-greedy mechanism, featuring dynamically decaying exploration rates {ε_t} and regularization parameters {λ_t}. Theoretically, we establish consistency of the estimator and derive a sublinear regret bound that depends on the effective dimension of the RKHS. Moreover, under a finite-dimensional RKHS and a margin condition, the algorithm achieves the optimal O(√T) regret rate. This work provides a novel framework for online decision-making with high-dimensional contexts, balancing theoretical guarantees with practical implementability.
📝 Abstract
We consider the $\epsilon$-greedy strategy for the multi-armed bandit with covariates (MABC) problem, where the mean reward functions are assumed to lie in a reproducing kernel Hilbert space (RKHS). We propose to estimate the unknown mean reward functions using an online weighted kernel ridge regression estimator, and show that the resulting estimator is consistent under appropriate decay rates of the exploration probability sequence $\{\epsilon_t\}_t$ and the regularization parameter sequence $\{\lambda_t\}_t$. Moreover, we show that for any choice of kernel and the corresponding RKHS, we achieve a sub-linear regret rate depending on the intrinsic dimensionality of the RKHS. Furthermore, we achieve the optimal regret rate of $\sqrt{T}$ under a margin condition for finite-dimensional RKHSs.
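The core mechanism described above can be illustrated with a minimal sketch: an ε-greedy policy that fits one kernel ridge regressor per arm, with exploration probability ε_t and regularization λ_t both decaying over time. This is an assumption-laden simplification, not the paper's method: it uses a plain RBF kernel, omits the paper's weighting scheme in the ridge regression, and the decay rates (ε_t ∝ 1/t, λ_t ∝ 1/√t) are illustrative choices rather than the rates required by the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelEpsGreedy:
    """epsilon-greedy with one kernel ridge regressor per arm (illustrative sketch;
    the paper's weighted estimator and decay rates are not reproduced here)."""

    def __init__(self, n_arms, eps_scale=1.0, lam_scale=1.0):
        self.n_arms = n_arms
        self.eps_scale = eps_scale
        self.lam_scale = lam_scale
        self.X = [[] for _ in range(n_arms)]   # contexts observed per arm
        self.y = [[] for _ in range(n_arms)]   # rewards observed per arm
        self.t = 0

    def predict(self, arm, x):
        """Kernel ridge estimate of the mean reward of `arm` at context `x`."""
        if not self.X[arm]:
            return 0.0
        Xa = np.asarray(self.X[arm])
        ya = np.asarray(self.y[arm])
        lam = self.lam_scale / np.sqrt(self.t + 1)   # decaying regularization (illustrative)
        K = rbf_kernel(Xa, Xa)
        alpha = np.linalg.solve(K + lam * len(ya) * np.eye(len(ya)), ya)
        return float(rbf_kernel(x[None, :], Xa) @ alpha)

    def select(self, x):
        """Explore uniformly with prob. eps_t, else pick the arm with highest estimate."""
        self.t += 1
        eps = min(1.0, self.eps_scale / self.t)      # decaying exploration (illustrative)
        if rng.random() < eps:
            return int(rng.integers(self.n_arms))
        return int(np.argmax([self.predict(a, x) for a in range(self.n_arms)]))

    def update(self, arm, x, r):
        self.X[arm].append(x)
        self.y[arm].append(r)

# Toy run: arm 0 pays sin(x[0]), arm 1 pays cos(x[0]), plus noise.
policy = KernelEpsGreedy(n_arms=2, eps_scale=5.0)
for _ in range(200):
    x = rng.uniform(-1, 1, size=3)
    a = policy.select(x)
    r = (np.sin(x[0]) if a == 0 else np.cos(x[0])) + 0.1 * rng.normal()
    policy.update(a, x, r)
```

Refitting from scratch each round costs O(n³) per arm; the paper's online formulation is precisely what avoids this, but a batch solve keeps the sketch short and transparent.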