🤖 AI Summary
This work addresses the challenge of efficient exploration in reinforcement learning without external rewards by proposing an intrinsic average-reward framework that eschews explicit rollouts. The approach reframes exploration as maximizing the entropy of the policy's stationary state distribution. Its key innovation is a spectral characterization of the entropy-regularized objective: the relevant stationary distributions can be computed directly as dominant eigenvectors of a problem-dependent transition matrix, circumventing the reliance on rollouts and density estimation inherent in conventional methods. To recover solutions to the original unregularized objective, the authors employ Posterior Policy Iteration (PPI), which monotonically improves the entropy and converges in value. The resulting EigenVector-based Exploration (EVE) algorithm produces high stationary-entropy policies in deterministic grid environments, achieving exploration performance competitive with rollout-based baselines while substantially reducing computational overhead.
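To make the spectral claim concrete, here is a minimal sketch (not the paper's implementation) of the underlying linear-algebra fact: for a fixed policy in a tabular MDP, the stationary state distribution is the principal left eigenvector of the induced row-stochastic transition matrix, and its entropy is the quantity being maximized. The matrix `P` and the function names below are illustrative assumptions.

```python
import numpy as np

def stationary_distribution(P, iters=5000, tol=1e-12):
    """Principal left eigenvector (eigenvalue 1) of a row-stochastic matrix P.

    P[s, s'] is the state-transition probability under a fixed policy.
    Power iteration is run on the "lazy" chain 0.5*I + 0.5*P, which has the
    same stationary distribution but is aperiodic, so the iteration converges
    even for periodic chains (e.g., deterministic grid worlds).
    """
    n = P.shape[0]
    P_lazy = 0.5 * np.eye(n) + 0.5 * P
    d = np.full(n, 1.0 / n)             # start from the uniform distribution
    for _ in range(iters):
        d_next = d @ P_lazy             # one step of the lazy chain
        d_next /= d_next.sum()          # renormalize for numerical safety
        if np.abs(d_next - d).max() < tol:
            break
        d = d_next
    return d

def stationary_entropy(P):
    """Entropy H(d) = -sum_s d(s) log d(s) of the stationary distribution."""
    d = np.clip(stationary_distribution(P), 1e-12, None)  # avoid log(0)
    return float(-np.sum(d * np.log(d)))
```

The point of the spectral view is that this eigenvector is available through cheap iterative updates of exactly this kind, with no rollouts or density estimation.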
📝 Abstract
Efficient exploration remains a central challenge in reinforcement learning, serving as a useful pretraining objective for data collection, particularly when an external reward function is unavailable. A principled formulation of the exploration problem is to find policies that maximize the entropy of their induced steady-state visitation distribution, thereby encouraging uniform long-run coverage of the state space. Many existing exploration approaches require estimating state visitation frequencies through repeated on-policy rollouts, which can be computationally expensive. In this work, we instead consider an intrinsic average-reward formulation in which the reward is derived from the visitation distribution itself, so that the optimal policy maximizes steady-state entropy. An entropy-regularized version of this objective admits a spectral characterization: the relevant stationary distributions can be computed from the dominant eigenvectors of a problem-dependent transition matrix. This insight leads to a novel algorithm for solving the maximum entropy exploration problem, EVE (EigenVector-based Exploration), which avoids explicit rollouts and distribution estimation, instead computing the solution through iterative updates, similar to a value-based approach. To address the original unregularized objective, we employ a posterior-policy iteration (PPI) approach, which monotonically improves the entropy and converges in value. We prove convergence of EVE under standard assumptions and demonstrate empirically that it efficiently produces policies with high steady-state entropy, achieving competitive exploration performance relative to rollout-based baselines in deterministic grid-world environments.
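The abstract does not spell out EVE or PPI, so the following is only a hedged sketch of the generic intrinsic-reward outer loop this line of work builds on, in the style of Frank-Wolfe schemes from the maximum-entropy exploration literature: set the intrinsic reward to r(s) = -log d(s), so that the average reward of the current policy, sum_s d(s) r(s), equals the entropy H(d); plan against that reward; then update conservatively. The transition tensor `T`, the discounted stand-in planner, and the policy-mixing step are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def induced_chain(T, pi):
    """P[s, s'] = sum_a pi(a|s) * T(s'|s, a) for policy pi on tensor T[s, a, s']."""
    return np.einsum('sa,sax->sx', pi, T)

def greedy_policy(T, r, gamma=0.99, iters=500):
    """Discounted value iteration as a stand-in for the average-reward planner."""
    n, num_a = T.shape[0], T.shape[1]
    v = np.zeros(n)
    for _ in range(iters):
        v = (r[:, None] + gamma * np.einsum('sax,x->sa', T, v)).max(axis=1)
    q = r[:, None] + gamma * np.einsum('sax,x->sa', T, v)
    pi = np.zeros((n, num_a))
    pi[np.arange(n), q.argmax(axis=1)] = 1.0    # deterministic greedy policy
    return pi

def max_entropy_outer_loop(T, outer_iters=50, eta=0.1):
    """Intrinsic-reward loop: reward = -log(visitation), plan, mix policies."""
    n, num_a = T.shape[0], T.shape[1]
    pi = np.full((n, num_a), 1.0 / num_a)       # start from the uniform policy
    for _ in range(outer_iters):
        # Stationary distribution of the current policy via power iteration
        # on the aperiodic "lazy" chain (same fixed point as the raw chain).
        P_lazy = 0.5 * np.eye(n) + 0.5 * induced_chain(T, pi)
        d = np.full(n, 1.0 / n)
        for _ in range(2000):
            d = d @ P_lazy
            d /= d.sum()
        r = -np.log(np.clip(d, 1e-12, None))    # intrinsic reward: rare states pay more
        pi = (1 - eta) * pi + eta * greedy_policy(T, r)  # conservative mixing
    return pi
```

Note that mixing policy tables directly is a simplification (Frank-Wolfe-style methods typically mix visitation distributions), and that this loop re-estimates d at every iteration; the paper's contribution is precisely to replace such estimation-heavy steps with eigenvector computations and a PPI update that provably improves entropy monotonically.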