AI Summary
This paper investigates Blackwell optimality and the identification of bias-optimal policies of each order in Markov decision processes (MDPs). To address the overly asymptotic nature, and hence limited practicality, of average-reward optimality, the authors propose a learning algorithm with vanishing error probability that sequentially computes *k*-order bias-optimal policies. A key contribution is a stopping criterion that is independent of the optimality order: whenever the MDP admits a unique Bellman-optimal policy, the algorithm terminates in finite time. Combining reinforcement learning design, statistical hypothesis testing, and Bellman equation analysis, the method achieves asymptotically consistent identification of optimal policies at every order: as the error probability tends to zero, the algorithm identifies all policies up to Blackwell optimality with probability tending to one, with verifiable finite-time termination whenever termination is possible.
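For reference, the hierarchy invoked above is usually phrased via $n$-discount optimality (this follows the classical treatment, e.g. Puterman's; the paper's exact notation may differ). A policy $\pi^\star$ is *$n$-discount optimal* if, for every policy $\pi$ and state $s$,

$$
\liminf_{\gamma \uparrow 1} \, (1-\gamma)^{-n} \left( V_\gamma^{\pi^\star}(s) - V_\gamma^{\pi}(s) \right) \;\ge\; 0,
$$

where $V_\gamma^{\pi}$ is the $\gamma$-discounted value of $\pi$. Gain (average-reward) optimality corresponds to $n = -1$, bias optimality to $n = 0$, higher orders refine these further, and *Blackwell optimality* means $n$-discount optimality for every $n$, equivalently, discount optimality for all $\gamma$ sufficiently close to $1$.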
Abstract
Although average-gain optimality is a commonly adopted performance measure in Markov Decision Processes (MDPs), it is often too asymptotic. Further incorporating measures of immediate losses leads to the hierarchy of bias optimalities, all the way up to Blackwell optimality. In this paper, we investigate the problem of identifying policies of such optimality orders. To that end, for each order, we construct a learning algorithm with vanishing probability of error. Furthermore, we characterize the class of MDPs for which identification algorithms can stop in finite time. That class corresponds to the MDPs with a unique Bellman-optimal policy, and it does not depend on the optimality order considered. Lastly, we provide a tractable stopping rule that, when coupled with our learning algorithm, triggers in finite time whenever it is possible to do so.
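As a concrete, if toy, illustration of Blackwell optimality (this sketch is not the paper's algorithm; the two-state MDP and all names below are invented for illustration), one can enumerate the deterministic policies of a small, fully known MDP, solve $V_\gamma^{\pi} = (I - \gamma P_\pi)^{-1} r_\pi$ exactly, and observe which policy is discount-optimal as $\gamma \uparrow 1$:

```python
import numpy as np

# Toy two-state, two-action MDP (invented for illustration):
# P[a, s, s'] is the transition probability, r[a, s] the expected reward.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.1, 0.9],
               [0.7, 0.3]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

def value(policy, gamma):
    """Exact discounted value V = (I - gamma * P_pi)^{-1} r_pi of a
    deterministic policy, given as (action in state 0, action in state 1)."""
    P_pi = np.array([P[policy[s], s] for s in range(2)])
    r_pi = np.array([r[policy[s], s] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

policies = [(a0, a1) for a0 in range(2) for a1 in range(2)]
for gamma in (0.9, 0.99, 0.999, 0.9999):
    vals = {pi: value(pi, gamma) for pi in policies}
    # A discount-optimal policy dominates every other policy componentwise.
    best = [pi for pi in policies
            if all(np.all(vals[pi] >= vals[q] - 1e-9) for q in policies)]
    print(f"gamma={gamma}: discount-optimal policies {best}")
```

A policy that remains discount-optimal for every $\gamma$ on such a grid near $1$ is the natural candidate for Blackwell optimality. The paper's setting is harder: the MDP is unknown, so this kind of identification must be done from samples, with controlled error probability and a stopping rule that can certify termination.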