Kernel ε-Greedy for Multi-Armed Bandits with Covariates

📅 2023-06-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies the multi-armed bandit problem with high-dimensional contextual covariates, aiming to efficiently estimate and select the arm with the optimal mean reward function. We propose an online ε-greedy policy based on weighted kernel ridge regression, which nonparametrically models the unknown mean reward functions in a reproducing kernel Hilbert space (RKHS). To our knowledge, this is the first work integrating kernel methods with an online ε-greedy mechanism, featuring dynamically decaying exploration rates {ε_t} and regularization parameters {λ_t}. Theoretically, we establish consistency of the estimator and derive a sublinear regret bound that depends on the effective dimension of the RKHS. Moreover, under a finite-dimensional RKHS and a margin condition, the algorithm achieves the optimal O(√T) regret rate. This work provides a novel framework for online decision-making with high-dimensional contexts, balancing theoretical guarantees with practical implementability.
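The summary above describes an online ε-greedy policy that estimates each arm's mean reward function by weighted kernel ridge regression, with exploration rates ε_t and regularization parameters λ_t that decay over time. A minimal sketch of this idea is below; the RBF kernel choice, the specific decay schedules, and all class/function names are illustrative assumptions, not the paper's exact algorithm or rates.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    # (one possible RKHS kernel; the paper allows general kernels)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelEpsGreedy:
    """Illustrative kernel eps-greedy for bandits with covariates.

    Each arm's mean reward function is estimated by kernel ridge
    regression on the covariates observed when that arm was pulled.
    The exploration probability and regularization decay with time;
    the schedules used here are placeholders, not the paper's rates.
    """

    def __init__(self, n_arms, gamma=1.0, rng=None):
        self.n_arms = n_arms
        self.gamma = gamma
        self.rng = rng or np.random.default_rng(0)
        self.X = [[] for _ in range(n_arms)]  # covariates per arm
        self.y = [[] for _ in range(n_arms)]  # rewards per arm
        self.t = 0

    def _predict(self, arm, x):
        # Kernel ridge regression estimate of arm's mean reward at x
        if not self.X[arm]:
            return 0.0
        Xa = np.array(self.X[arm])
        ya = np.array(self.y[arm])
        n = len(Xa)
        lam = n ** -0.5  # decaying regularization (illustrative schedule)
        K = rbf_kernel(Xa, Xa, self.gamma)
        alpha = np.linalg.solve(K + lam * n * np.eye(n), ya)
        k_x = rbf_kernel(Xa, x[None, :], self.gamma).ravel()
        return float(alpha @ k_x)

    def select(self, x):
        # With probability eps_t explore uniformly, else exploit
        self.t += 1
        eps = min(1.0, self.t ** (-1 / 3))  # illustrative decay rate
        if self.rng.random() < eps:
            return int(self.rng.integers(self.n_arms))
        preds = [self._predict(a, x) for a in range(self.n_arms)]
        return int(np.argmax(preds))

    def update(self, arm, x, reward):
        self.X[arm].append(x)
        self.y[arm].append(reward)
```

For example, with two arms whose rewards depend nonlinearly on the covariate, the policy gradually shifts from uniform exploration toward pulling the arm with the larger estimated mean reward at the observed covariate.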
📝 Abstract
We consider the $\epsilon$-greedy strategy for the multi-armed bandit with covariates (MABC) problem, where the mean reward functions are assumed to lie in a reproducing kernel Hilbert space (RKHS). We propose to estimate the unknown mean reward functions using an online weighted kernel ridge regression estimator, and show the resulting estimator to be consistent under appropriate decay rates of the exploration probability sequence, $\{\epsilon_t\}_t$, and regularization parameters, $\{\lambda_t\}_t$. Moreover, we show that for any choice of kernel and the corresponding RKHS, we achieve a sub-linear regret rate depending on the intrinsic dimensionality of the RKHS. Furthermore, we achieve the optimal regret rate of $\sqrt{T}$ under a margin condition for finite-dimensional RKHS.
Problem

Research questions and friction points this paper is trying to address.

Estimating unknown mean reward functions lying in an RKHS for the MABC problem
Achieving a sub-linear regret rate that depends on the intrinsic dimensionality of the RKHS
Attaining the optimal $\sqrt{T}$ regret rate under a margin condition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online weighted kernel ridge regression estimator with decaying exploration and regularization sequences
Sub-linear regret bound depending on the effective dimension of the RKHS
Optimal $\sqrt{T}$ regret rate under a margin condition for finite-dimensional RKHS
Sakshi Arya
Assistant Professor at Case Western Reserve University
Statistics
Bharath K. Sriperumbudur
Department of Statistics, Pennsylvania State University