🤖 AI Summary
This paper addresses the exploration-exploitation imbalance in neural contextual bandits. We propose Neural-σ²-LinearUCB, a variance-aware algorithm that explicitly incorporates a time-varying upper bound σₜ² on the reward noise variance into the neural UCB framework. The method combines an adaptive variance estimation mechanism with analysis grounded in Neural Tangent Kernel (NTK) theory to improve the accuracy of uncertainty quantification. Theoretically, we prove that its oracle variant achieves a tighter regret bound than existing neural UCB approaches. Empirically, on synthetic data, UCI benchmarks, MNIST, and CIFAR-10, Neural-σ²-LinearUCB significantly reduces cumulative regret and improves confidence interval calibration, while maintaining computational efficiency comparable to state-of-the-art methods.
📝 Abstract
By leveraging the representation power of deep neural networks, neural upper confidence bound (UCB) algorithms have shown success in contextual bandits. To further balance exploration and exploitation, we propose Neural-$\sigma^2$-LinearUCB, a variance-aware algorithm that utilizes $\sigma^2_t$, an upper bound of the reward noise variance at round $t$, to enhance the uncertainty quantification quality of the UCB, resulting in improved regret performance. We provide an oracle version of our algorithm, characterized by an oracle variance upper bound $\sigma^2_t$, and a practical version with a novel estimator for this variance bound. Theoretically, we provide rigorous regret analysis for both versions and prove that our oracle algorithm achieves a better regret guarantee than other neural-UCB algorithms in the neural contextual bandits setting. Empirically, our practical method enjoys similar computational efficiency while outperforming state-of-the-art techniques, achieving better calibration and lower regret across multiple standard settings, including the synthetic, UCI, MNIST, and CIFAR-10 datasets.
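To make the core idea concrete, the following is a minimal, hypothetical sketch of how a variance bound $\sigma^2_t$ can scale the exploration bonus in a LinearUCB-style score. It is not the paper's exact rule; the function name, the scalar `beta`, and the use of the precision matrix `A_inv` are illustrative assumptions (in the neural setting, the features would be NTK-style gradient features from the network).

```python
import numpy as np

def sigma2_linucb_scores(features, theta_hat, A_inv, sigma2_t, beta=1.0):
    """Hypothetical variance-aware UCB scores (illustrative, not the paper's rule).

    features : (K, d) array, one feature vector per arm
    theta_hat: (d,) current parameter estimate
    A_inv    : (d, d) inverse of the regularized design matrix
    sigma2_t : upper bound on the reward noise variance at round t;
               a larger bound widens the exploration bonus
    beta     : confidence scaling constant (assumption)
    """
    scores = []
    for x in features:
        mean = x @ theta_hat                               # exploitation term
        bonus = beta * np.sqrt(sigma2_t * (x @ A_inv @ x))  # variance-scaled width
        scores.append(mean + bonus)
    return np.array(scores)

# Toy usage: with identity features/design, the bonus is sqrt(sigma2_t)
scores = sigma2_linucb_scores(np.eye(2), np.array([1.0, 0.5]), np.eye(2), 1.0)
chosen_arm = int(np.argmax(scores))
```

The intended effect is visible here: shrinking `sigma2_t` tightens every confidence width uniformly, so a well-estimated (small) variance bound yields less over-exploration than a fixed, worst-case bonus.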