Regret Analysis of Sleeping Competing Bandits

📅 2026-03-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a limitation of classical competing bandit models, which fail to capture the realistic scenario where players and arms may become arbitrarily unavailable over time. To this end, we formally introduce the Sleeping Competing Bandits model, which incorporates intermittent availability and extends the definition of regret accordingly. Leveraging tools from online learning and stable matching theory, we propose a novel algorithm that achieves stable matching under dynamic availability constraints. Our theoretical analysis shows that the algorithm is asymptotically optimal when the number of arms $K$ greatly exceeds the number of players $N$, with a regret upper bound of $O(NK \log T_i / \Delta^2)$ and a matching lower bound of $\Omega(N(K - N + 1) \log T_i / \Delta^2)$.

๐Ÿ“ Abstract
The Competing Bandits framework is a recently emerging area that integrates multi-armed bandits in online learning with stable matching in game theory. While conventional models assume that all players and arms are constantly available, in real-world problems, their availability can vary arbitrarily over time. In this paper, we formulate this setting as Sleeping Competing Bandits. To analyze this problem, we naturally extend the regret definition used in existing competing bandits and derive regret bounds for the proposed model. We propose an algorithm that achieves an asymptotic regret bound of $\mathrm{O}\left(NK\log T_{i}/\Delta^2\right)$ under reasonable assumptions, where $N$ is the number of players, $K$ is the number of arms, $T_{i}$ is the number of rounds of each player $p_i$, and $\Delta$ is the minimum reward gap. We also provide a regret lower bound of $\Omega\left(N(K-N+1)\log T_{i}/\Delta^2\right)$ under the same assumptions. This implies that our algorithm is asymptotically optimal in the regime where the number of arms $K$ is much larger than the number of players $N$.
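The setting described above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm: the i.i.d. availability process, the Gaussian rewards, and the conflict resolution by fixed player priority are all illustrative assumptions standing in for the paper's stable-matching machinery.

```python
import math
import random


def sleeping_ucb_matching(n_players, n_arms, means, horizon, avail_prob=0.8, seed=0):
    """Toy sketch of competing bandits with sleeping arms.

    Each round, every arm is independently available with probability
    `avail_prob` (an assumed availability model). Each player picks the
    available arm with the highest UCB index; conflicts go to the
    lower-indexed player, a hypothetical stand-in for the stable-matching
    step of the actual algorithm. Returns the total reward collected.
    """
    rng = random.Random(seed)
    counts = [[0] * n_arms for _ in range(n_players)]   # pulls per (player, arm)
    sums = [[0.0] * n_arms for _ in range(n_players)]   # reward sums per (player, arm)
    total_reward = 0.0

    for t in range(1, horizon + 1):
        # Arms awake this round (sleeping-bandit aspect of the model).
        available = [k for k in range(n_arms) if rng.random() < avail_prob]
        taken = set()
        for i in range(n_players):  # lower index = higher priority (assumption)
            best, best_score = None, float("-inf")
            for k in available:
                if k in taken:
                    continue
                if counts[i][k] == 0:
                    score = float("inf")  # force exploration of unseen arms
                else:
                    mean = sums[i][k] / counts[i][k]
                    score = mean + math.sqrt(2.0 * math.log(t) / counts[i][k])
                if score > best_score:
                    best, best_score = k, score
            if best is None:
                continue  # no available, unclaimed arm for this player
            taken.add(best)
            reward = means[i][best] + rng.gauss(0.0, 0.1)
            counts[i][best] += 1
            sums[i][best] += reward
            total_reward += reward

    return total_reward
```

Running, e.g., `sleeping_ucb_matching(2, 4, [[0.9, 0.5, 0.4, 0.3], [0.4, 0.8, 0.5, 0.2]], 500)` lets each player converge toward its best arm among those awake; the minimum reward gap $\Delta$ in the bounds corresponds to the smallest gap between these mean entries.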
Problem

Research questions and friction points this paper is trying to address.

Sleeping Competing Bandits
multi-armed bandits
stable matching
online learning
regret analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sleeping Competing Bandits
regret analysis
asymptotic optimality
online learning
stable matching