🤖 AI Summary
Average-reward reinforcement learning is difficult to handle with conventional discounted formulations, which complicate both computation of the objective and policy optimization.
Method: This paper proposes a linear solution framework based on an eigenvector decomposition of a feature matrix, removing the need for a discount factor. It is the first to extend matrix eigenvector methods from large deviation theory to the function approximation setting, unifying the theoretical characterization of the discounted, average-reward, and entropy-regularized objectives. The approach combines neural-network function approximation with a posterior policy iteration scheme, which removes explicit entropy regularization and thereby decouples policy updates from regularization effects.
Results: Evaluated on classic control benchmarks, the algorithm converges faster, trains more stably, estimates the average reward more accurately, and reaches better final policy performance than state-of-the-art baselines.
📝 Abstract
In reinforcement learning, two objective functions have been developed extensively in the literature: discounted and average rewards. The generalization to an entropy-regularized setting has led to improved robustness and exploration for both of these objectives. Recently, the entropy-regularized average-reward problem was addressed using tools from large deviation theory in the tabular setting. This method has the advantage of linearity, providing access to both the optimal policy and the average reward rate through properties of a single matrix. In this paper, we extend that framework to more general settings by developing approaches based on function approximation with neural networks. This formulation reveals new theoretical insights into the relationship between different objectives used in RL. Additionally, we combine our algorithm with a posterior policy iteration scheme, showing how our approach can also solve the average-reward RL problem without entropy regularization. Using classic control benchmarks, we experimentally find that our method compares favorably with other algorithms in terms of stability and rate of convergence.
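To make the "single matrix" claim concrete, here is a minimal sketch of the tabular large-deviation construction the paper builds on: in an entropy-regularized average-reward MDP, the dominant (Perron) eigenvalue of a "tilted" state-action matrix yields the reward rate, and its eigenvector yields the optimal policy. The random MDP, the inverse temperature `beta`, the uniform prior policy, and the variable names are illustrative assumptions, not the paper's setup or benchmarks.

```python
# Hedged sketch: recover the entropy-regularized average reward rate and the
# optimal policy from one eigenpair of a tilted matrix (tabular, illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, beta = 4, 2, 1.0  # beta: assumed inverse temperature

# Random transition kernel P[s, a, s'] and rewards r[s, a] (toy MDP).
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))
prior = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform prior policy

# Tilted matrix over state-action pairs:
#   T[(s,a),(s',a')] = exp(beta * r(s,a)) * P(s'|s,a) * prior(a'|s')
T = (np.exp(beta * r)[:, :, None, None]
     * P[:, :, :, None]
     * prior[None, None, :, :]).reshape(n_states * n_actions, -1)

# Power iteration for the dominant eigenpair (T is entrywise positive,
# so the Perron eigenvalue rho and eigenvector u are well defined).
u = np.ones(T.shape[0])
for _ in range(2000):
    u = T @ u
    rho = np.linalg.norm(u)
    u /= rho

theta = np.log(rho) / beta          # entropy-regularized average reward rate
u = u.reshape(n_states, n_actions)  # u plays the role of exp(beta * Q)
pi = prior * u
pi /= pi.sum(axis=1, keepdims=True)  # optimal policy pi*(a|s) ∝ prior * u
```

The paper's contribution replaces this explicit eigenvector computation, which needs the full transition matrix, with neural-network function approximation of the eigenvector, so the same linear structure carries over to settings where the matrix is only accessible through samples.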