🤖 AI Summary
In average-reward Markov decision processes (AMDPs), state-of-the-art regret-minimization algorithms manage episodes with the Doubling Trick (DT), under which the regret incurred during bad episodes—those where suboptimal policies are executed—grows linearly in T. To address this, we propose the **Vanishing Multiplicative (VM) episode termination rule**, which replaces DT while preserving the Extended Value Iteration (EVI) framework. Our key insight is a new characterization of how confidence region widths behave differently in good versus bad episodes; leveraging this, we design an adaptive episode termination mechanism whose thresholds tighten as episodes accumulate. Theoretically, VM reduces the regret of exploration from Ω(T) to O(log T), while overall regret remains as good as, if not better than, under DT. Empirically, it significantly improves single-run stability and accelerates recovery from bad episodes.
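To make the contrast concrete, here is a minimal Python sketch (not the paper's code) of the two termination rules: DT ends an episode once the within-episode visits of some state-action pair match its total prior visits, while VM ends it once those visits exceed a *vanishing* fraction of prior visits. The schedule `alpha_k = 1/k` is a hypothetical choice for illustration; the paper's exact (VM) threshold may differ.

```python
# Hedged sketch of the two episode-termination rules.
# v: within-episode visit counts, N: counts from all prior episodes,
# k: episode index.  alpha_k = 1/k is an illustrative assumption.

def dt_should_stop(v, N, k):
    """Doubling Trick (DT): end the episode once some state-action pair
    has been visited as often within the episode as in all prior
    episodes combined (UCRL2-style doubling of counts)."""
    return any(v[sa] >= max(1, N[sa]) for sa in v)


def vm_should_stop(v, N, k):
    """Vanishing Multiplicative (VM): same shape as DT, but the
    multiplicative threshold shrinks with the episode index k, so a bad
    policy is abandoned after far fewer visits in late episodes."""
    alpha_k = 1.0 / k  # hypothetical vanishing schedule, for illustration
    return any(v[sa] >= max(1, alpha_k * N[sa]) for sa in v)


# Tiny check: after 100 prior visits to (s0, a0), DT waits for 100 more
# within-episode visits, while VM at episode k=20 stops after only 5.
N = {("s0", "a0"): 100, ("s0", "a1"): 7}
v = {("s0", "a0"): 12, ("s0", "a1"): 0}
print(dt_should_stop(v, N, k=20))  # False: 12 < 100
print(vm_should_stop(v, N, k=20))  # True: 12 >= max(1, 100/20)
```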
📝 Abstract
In average reward Markov decision processes, state-of-the-art algorithms for regret minimization follow a well-established framework: they are model-based, optimistic and episodic. First, they maintain a confidence region from which optimistic policies are computed using a well-known subroutine called Extended Value Iteration (EVI). Second, these policies are used over time windows called episodes, each ended by the Doubling Trick (DT) rule or a variant thereof. In this work, without modifying EVI, we show that there is a significant advantage in replacing (DT) by another simple rule, which we call the Vanishing Multiplicative (VM) rule. When managing episodes with (VM), the algorithm's regret is, both in theory and in practice, as good as, if not better than, with (DT), while the one-shot behavior is greatly improved. More specifically, the management of bad episodes (when sub-optimal policies are being used) is much better under (VM) than under (DT): the regret of exploration becomes logarithmic rather than linear. These results are made possible by a new in-depth understanding of the contrasting behaviors of confidence regions during good and bad episodes.
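For readers who want to see where such a rule plugs in, below is a hedged sketch of the generic model-based, optimistic, episodic loop described in the abstract. The toy two-state MDP and the `compute_optimistic_policy` placeholder (standing in for EVI) are our illustrative assumptions, not the paper's construction; only the `should_stop` hook—e.g. `dt_should_stop` or `vm_should_stop` from the sketch above—changes between (DT) and (VM).

```python
import random

N_STATES, N_ACTIONS = 2, 2


def toy_step(s, a):
    """Toy two-state chain (ours, not the paper's): action 1 usually
    moves to the other state; being in state 1 yields reward 1."""
    s_next = 1 - s if (a == 1 and random.random() < 0.9) else s
    return s_next, float(s_next == 1)


def compute_optimistic_policy(N):
    """Placeholder for EVI: here we simply favor the least-visited
    action in each state; real EVI solves an optimistic planning step."""
    return [min(range(N_ACTIONS), key=lambda a: N[(s, a)])
            for s in range(N_STATES)]


def run(should_stop, T=10_000):
    """Generic episodic loop: recompute an optimistic policy at the start
    of each episode, play it until `should_stop` fires, fold counts in."""
    N = {(s, a): 0 for s in range(N_STATES) for a in range(N_ACTIONS)}
    s, t, k = 0, 0, 0
    while t < T:
        k += 1
        policy = compute_optimistic_policy(N)        # stands in for EVI
        v = {sa: 0 for sa in N}                      # within-episode counts
        while t < T and not should_stop(v, N, k):    # (DT) or (VM) hook
            a = policy[s]
            v[(s, a)] += 1
            s, _ = toy_step(s, a)
            t += 1
        for sa in N:                                 # commit episode counts
            N[sa] += v[sa]
    return k  # episodes used over the horizon


# e.g. run(vm_should_stop) produces many more, shorter episodes than
# run(dt_should_stop), cutting losses from a bad policy much sooner.
```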