UCB for Large-Scale Pure Exploration: Beyond Sub-Gaussianity

📅 2025-11-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Traditional UCB algorithms for pure exploration rely on sub-Gaussian assumptions and can fail under heavy-tailed reward distributions, a weakness that grows more acute at large scale. Method: We propose a meta-UCB framework that abstracts UCB algorithms whose exploration bonus depends only on each arm's own sample size, selects the most-sampled arm as the best upon stopping, and requires no prior knowledge of the distribution type. Contribution/Results: We derive a distribution-free lower bound on the probability of correct selection and, building on it, establish sample optimality of this broad class of UCB algorithms in two large-scale non-sub-Gaussian settings: (1) arms sharing a common location-scale structure with bounded variance, and (2) arms with bounded absolute moments of order q > 3. Extensive numerical experiments demonstrate the robustness and efficacy of UCB algorithms under heavy-tailed reward distributions.

๐Ÿ“ Abstract
Selecting the best alternative from a finite set represents a broad class of pure exploration problems. Traditional approaches to pure exploration have predominantly relied on Gaussian or sub-Gaussian assumptions on the performance distributions of all alternatives, which limit their applicability to non-sub-Gaussian, especially heavy-tailed, problems. The need to move beyond sub-Gaussianity may become even more critical in large-scale problems, which tend to be especially sensitive to distributional specifications. In this paper, motivated by the widespread use of upper confidence bound (UCB) algorithms in pure exploration and beyond, we investigate their performance in large-scale, non-sub-Gaussian settings. We consider the simplest category of UCB algorithms, where the UCB value for each alternative is defined as the sample mean plus an exploration bonus that depends only on its own sample size. We abstract this into a meta-UCB algorithm and propose letting it select the alternative with the largest sample size as the best upon stopping. For this meta-UCB algorithm, we first derive a distribution-free lower bound on the probability of correct selection. Building on this bound, we analyze two general non-sub-Gaussian scenarios: (1) all alternatives follow a common location-scale structure and have bounded variance; and (2) when such a structure does not hold, each alternative has a bounded absolute moment of order $q > 3$. In both settings, we show that the meta-UCB algorithm, and therefore a broad class of UCB algorithms, can achieve sample optimality. These results demonstrate the applicability of UCB algorithms for solving large-scale pure exploration problems with non-sub-Gaussian distributions. Numerical experiments support our results and provide additional insights into the comparative behaviors of UCB algorithms within and beyond our meta-UCB framework.
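The meta-UCB loop described in the abstract can be sketched in a few lines: each arm's index is its sample mean plus a bonus depending only on that arm's own sample size, sampling stops once any arm hits a maximum sample size, and the most-sampled arm is returned. This is a minimal illustration, not the paper's implementation; the particular bonus function and the toy Gaussian arms below are assumptions for demonstration only.

```python
import math
import random

def meta_ucb(sample_fns, n_max, bonus=None):
    """Sketch of a meta-UCB pure-exploration loop.

    Each arm's UCB index is its sample mean plus an exploration bonus
    that depends only on that arm's own sample size. Sampling stops
    once any arm reaches n_max pulls (a maximum-sample-size stopping
    rule), and the arm with the largest sample size is selected.
    """
    if bonus is None:
        # Illustrative bonus choice (an assumption): the meta-UCB
        # framework covers a broad class of such bonus functions.
        bonus = lambda n: math.sqrt(2.0 * math.log(n_max) / n)

    k = len(sample_fns)
    counts = [1] * k
    means = [fn() for fn in sample_fns]  # one initial pull per arm

    while max(counts) < n_max:
        # Pull the arm with the largest UCB value.
        i = max(range(k), key=lambda j: means[j] + bonus(counts[j]))
        x = sample_fns[i]()
        counts[i] += 1
        means[i] += (x - means[i]) / counts[i]  # running-mean update

    # Select the most-sampled arm as the best.
    return max(range(k), key=lambda j: counts[j])

# Toy usage: three arms with means 0.0, 0.1, 1.0; arm 2 is the best.
random.seed(0)
arms = [lambda m=m: m + random.gauss(0, 1) for m in (0.0, 0.1, 1.0)]
best = meta_ucb(arms, n_max=2000)
```

Selecting the most-sampled arm (rather than the one with the highest sample mean) is the feature the paper's distribution-free analysis hinges on.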
Problem

Research questions and friction points this paper is trying to address.

Extends UCB algorithms to non-sub-Gaussian distributions
Addresses large-scale pure exploration with heavy-tailed data
Establishes sample optimality in non-sub-Gaussian, bounded-variance settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-UCB algorithm for non-sub-Gaussian distributions
Distribution-free lower bound for correct selection probability
Sample optimality in large-scale pure exploration problems
Zaile Li
Technology and Operations Management Area, INSEAD, Fontainebleau, France
Weiwei Fan
Advanced Institute of Business and School of Economics and Management, Tongji University, Shanghai, China
L. Jeff Hong
Department of Industrial and Systems Engineering, University of Minnesota
operations research · stochastic simulation · machine learning · risk management