🤖 AI Summary
This study addresses the problem of universal online prediction for stochastic sequences, aiming to establish vanishing regret bounds that hold with high probability over a finite time horizon known to the learner. By integrating martingale inequalities, information-theoretic tools, and generalization bound techniques, the work presents the first high-probability regret upper bound for stochastic processes over countable alphabets. The result guarantees a convergence rate of $\mathcal{O}(T^{-1/2}\delta^{-1/2})$ with probability at least $1-\delta$, closely mirroring the form of the best-known in-expectation bounds of order $\mathcal{O}(T^{-1/2})$. Moreover, the analysis demonstrates that, without additional assumptions, the exponent of $\delta$ in a bound of this form cannot be improved, thereby establishing a fundamental limit on the achievable dependence on the confidence parameter.
📝 Abstract
We revisit the classical problem of universal prediction of stochastic sequences with a finite time horizon $T$ known to the learner. We investigate whether it is possible to derive vanishing regret bounds that hold with high probability, complementing existing bounds from the literature that hold in expectation. We propose such high-probability bounds, which take a form very similar to the prior in-expectation bounds. For universal prediction of a stochastic process over a countable alphabet, our bound yields a convergence rate of $\mathcal{O}(T^{-1/2}\delta^{-1/2})$ with probability at least $1-\delta$, compared to prior in-expectation bounds of order $\mathcal{O}(T^{-1/2})$. We also prove an impossibility result showing that the exponent of $\delta$ in a bound of this form cannot be improved without making additional assumptions.
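As a heuristic for why a $\delta^{-1/2}$ dependence is the natural target here (a sketch under assumed moment conditions, not necessarily the paper's actual argument): a first-moment bound $\mathbb{E}[R_T] \le C\,T^{-1/2}$ on a nonnegative regret $R_T$ only gives, via Markov's inequality, $R_T \le C\,T^{-1/2}\delta^{-1}$ with probability at least $1-\delta$. A second-moment bound of matching scale does better:

```latex
% Heuristic sketch (assumed, not the paper's proof): suppose the nonnegative
% regret R_T satisfies the second-moment bound E[R_T^2] <= C^2 T^{-1}.
% Applying the Chebyshev-type tail bound P(R_T >= eps) <= E[R_T^2]/eps^2
% with eps = C T^{-1/2} delta^{-1/2} gives
\[
  \Pr\!\left( R_T \ge C\, T^{-1/2}\, \delta^{-1/2} \right)
  \;\le\; \frac{\mathbb{E}\!\left[ R_T^2 \right]}{C^2\, T^{-1}\, \delta^{-1}}
  \;\le\; \delta ,
\]
% i.e. R_T <= C T^{-1/2} delta^{-1/2} with probability at least 1 - delta,
% matching the delta^{-1/2} dependence in the stated bound. A log(1/delta)
% dependence would require stronger (e.g. sub-Gaussian) tail control, which
% is consistent with the impossibility result: without additional
% assumptions, the exponent of delta cannot be improved.
```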