Replicability is Asymptotically Free in Multi-armed Bandits

📅 2024-02-12
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the asymptotic cost of replicability in stochastic multi-armed bandits, i.e., minimizing cumulative regret while ensuring high-probability replication of action sequences across independent runs. We propose a randomized decision framework based on confidence-interval calibration. Our key contributions are: (i) the first proof that replicability incurs no asymptotic regret penalty (zero asymptotic cost) as the horizon $T \to \infty$; (ii) a principled design paradigm enabling explicit control over the non-replicability probability; and (iii) for the two-armed case, the first information-theoretic lower bound on replicable regret, matched up to a $\log\log T$ factor by our algorithm. Crucially, our method confines the exploration overhead to $O(K^2/\rho^2)$ rounds, eliminating the $O(K^2/\rho^2)$ multiplicative regret amplification of prior algorithms, and thereby substantially improves the joint frontier of statistical verifiability and sample efficiency.

📝 Abstract
We consider a replicable stochastic multi-armed bandit algorithm that ensures, with high probability, that the algorithm's sequence of actions is not affected by the randomness inherent in the dataset. Replicability allows third parties to reproduce published findings and assists the original researcher in applying standard statistical tests. We observe that existing algorithms require $O(K^2/\rho^2)$ times more regret than nonreplicable algorithms, where $K$ is the number of arms and $\rho$ is the level of nonreplication. However, we demonstrate that this additional cost is unnecessary when the time horizon $T$ is sufficiently large for a given $K, \rho$, provided that the magnitude of the confidence bounds is chosen carefully. Therefore, for large $T$, our algorithm incurs a $K^2/\rho^2$ times smaller amount of exploration than existing replicable algorithms. To ensure the replicability of the proposed algorithms, we incorporate randomness into their decision-making processes. We propose a principled approach to limiting the probability of nonreplication; this approach elucidates the steps that existing research has implicitly followed. Furthermore, we derive the first lower bound for the two-armed replicable bandit problem, which implies the optimality of the proposed algorithms up to a $\log\log T$ factor in the two-armed case.
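The abstract's core mechanism, injecting shared internal randomness into the decision rule so that arm choices are stable across independent datasets, can be illustrated with the randomized-rounding trick that is standard in the replicability literature. This is a hedged sketch under assumed parameters, not the paper's actual algorithm; the function name, grid width `eps / rho`, and default values are illustrative:

```python
import numpy as np

def replicable_mean(samples, eps=0.05, rho=0.1, rng=None):
    """Estimate a mean so that two runs on independent datasets agree
    with probability >= 1 - rho whenever their empirical means differ
    by at most eps (illustrative sketch, not the paper's algorithm)."""
    # Shared internal randomness: both runs must pass the same seed.
    rng = rng if rng is not None else np.random.default_rng(0)
    width = eps / rho               # cell width of the rounding grid
    offset = rng.uniform(0, width)  # random shift, shared across runs
    m = float(np.mean(samples))
    # Snap the estimate to the randomly shifted grid: two estimates
    # within eps land in the same cell with probability >= 1 - rho.
    return round((m - offset) / width) * width + offset

# Two independent samples of the same arm, same internal seed:
a = replicable_mean([0.50, 0.52, 0.49], rng=np.random.default_rng(7))
b = replicable_mean([0.51, 0.50, 0.50], rng=np.random.default_rng(7))
```

A replicable bandit algorithm can then compare such rounded statistics rather than raw confidence bounds, so dataset noise rarely flips an arm-selection decision; `rho` here plays the role of the nonreplication level $\rho$ in the abstract.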
Problem

Research questions and friction points this paper is trying to address.

Multi-Armed Bandit Problem
Regret Minimization
Algorithm Replicability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Armed Bandit Algorithm
Reproducibility
Regret Minimization