Frozen Policy Iteration: Computationally Efficient RL under Linear $Q^π$ Realizability for Deterministic Dynamics

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Frozen Policy Iteration, the first provably efficient policy iteration algorithm in the online reinforcement learning setting under only the assumption of linear $Q^\pi$ realizability and without access to a simulator. Tailored for MDPs with stochastic initial states, stochastic rewards, and deterministic transitions, the method avoids redundant sampling by freezing the policy on sufficiently explored states and leveraging only high-confidence on-policy trajectories. By integrating linear $Q^\pi$ representation, a policy freezing mechanism, and data filtering, the algorithm achieves a regret bound of $\widetilde{O}(\sqrt{d^2 H^6 T})$, where $d$ is the feature dimension, $H$ the horizon, and $T$ the total number of rounds. The approach extends to Uniform-PAC guarantees and function classes with bounded Eluder dimension, and recovers the optimal rate for linear bandits when $H = 1$.

📝 Abstract
We study computationally and statistically efficient reinforcement learning under the linear $Q^\pi$ realizability assumption, where every policy's $Q$-function is linear in a given state-action feature representation. Prior methods in this setting are either computationally intractable or require (local) access to a simulator. In this paper, we propose a computationally efficient online RL algorithm, named Frozen Policy Iteration, for the linear $Q^\pi$ realizability setting that works for Markov Decision Processes (MDPs) with stochastic initial states, stochastic rewards, and deterministic transitions. Our algorithm achieves a regret bound of $\widetilde{O}(\sqrt{d^2H^6T})$, where $d$ is the dimensionality of the feature space, $H$ is the horizon length, and $T$ is the total number of episodes. Our regret bound is optimal for linear (contextual) bandits, which are a special case of our setting with $H = 1$. Existing policy iteration algorithms under the same setting rely heavily on repeatedly sampling the same state through simulator access, which is not implementable in the online setting with stochastic initial states studied in this paper. In contrast, our new algorithm circumvents this limitation by strategically using only the high-confidence portion of the trajectory data and freezing the policy on well-explored states, which ensures that all data used by our algorithm remains effectively on-policy throughout the course of learning. We further demonstrate the versatility of our approach by extending it to the Uniform-PAC setting and to function classes with bounded Eluder dimension.
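The two mechanisms the abstract describes can be sketched concretely: under linear $Q^\pi$ realizability, each policy's $Q$-function is fit by least squares over the features, and a state-action pair counts as "well explored" when its elliptical confidence width under the collected data is small. The sketch below is illustrative only; the function names, the ridge regularizer, and the threshold `beta` are assumptions, not the paper's actual Frozen Policy Iteration pseudocode.

```python
import numpy as np

def ls_q_estimate(phis, returns, lam=1.0):
    """Ridge least-squares fit of theta with Q^pi(s, a) ~ phi(s, a)^T theta,
    from on-policy features `phis` (n x d) and Monte Carlo returns (n,).
    (Illustrative; the paper assumes Q^pi is exactly linear in phi.)"""
    d = phis.shape[1]
    gram = phis.T @ phis + lam * np.eye(d)  # regularized Gram matrix
    return np.linalg.solve(gram, phis.T @ returns)

def is_well_explored(phi, cov_inv, beta):
    """Hypothetical freeze test: a state-action pair is treated as well
    explored once its confidence width sqrt(phi^T Sigma^{-1} phi), computed
    from the inverse Gram matrix of the data, falls below beta."""
    return float(np.sqrt(phi @ cov_inv @ phi)) <= beta
```

In a full learning loop, trajectory prefixes whose state-action features pass the freeze test would be retained (the data-filtering step), and the current policy would be frozen on those states while exploration continues on the under-explored remainder, which is what keeps the retained data effectively on-policy.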
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
linear Q^π realizability
deterministic dynamics
online RL
policy iteration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frozen Policy Iteration
linear Q^π realizability
online reinforcement learning
regret bound
deterministic dynamics
Yijing Ke
School of EECS, Peking University
Zihan Zhang
Department of Computer Science and Engineering, HKUST
Ruosong Wang
Assistant Professor, Peking University