🤖 AI Summary
This paper investigates the statistical efficiency of distributional temporal-difference learning with linear function approximation, addressing the sample-complexity bottleneck induced by large support sets of size $K$, and validating the central conjecture that "learning the return distribution is no harder than learning its expectation." Methodologically, it introduces the first fine-grained analysis of the linear-categorical Bellman equation and incorporates variance reduction to eliminate the dependence of sample complexity on $K$. Theoretically, it establishes tight finite-sample convergence bounds, proving that, under streaming data from a discounted MDP, distributional learning achieves the same statistical rate as classical value-function learning. The results show that, under mild regularity conditions, distributional reinforcement learning attains sample efficiency matching that of expectation-based learning, up to constant factors, thereby providing rigorous theoretical justification for scalable distribution estimation.
📝 Abstract
In this paper, we study the finite-sample statistical rates of distributional temporal difference (TD) learning with linear function approximation. The purpose of distributional TD learning is to estimate the return distribution of a discounted Markov decision process for a given policy. Previous works on the statistical analysis of distributional TD learning focus mainly on the tabular case. We first consider the linear function approximation setting and conduct a fine-grained analysis of the linear-categorical Bellman equation. Building on this analysis, we further incorporate variance reduction techniques in our new algorithms to establish tight sample complexity bounds independent of the support size $K$ when $K$ is large. Our theoretical results imply that, when employing distributional TD learning with linear function approximation, learning the full distribution of the return function from streaming data is no more difficult than learning its expectation. This work provides new insights into the statistical efficiency of distributional reinforcement learning algorithms.
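To make the setting concrete, the following is a minimal sketch of categorical distributional TD learning with a $K$-atom support and feature-based parameterization. It is an illustrative variant, not the paper's algorithm: the parameterization here uses a softmax over linear logits (as in C51-style methods), whereas the paper analyzes a linear-categorical parameterization, and no variance reduction is included. All names (`categorical_projection`, `linear_categorical_td_step`, the atom grid) are hypothetical.

```python
import numpy as np

def categorical_projection(atoms, target_atoms, target_probs):
    """Project a distribution on target_atoms onto the fixed grid `atoms`
    by splitting each atom's mass between its two nearest grid points."""
    K = len(atoms)
    v_min, v_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    proj = np.zeros(K)
    for z, p in zip(target_atoms, target_probs):
        z = np.clip(z, v_min, v_max)
        b = (z - v_min) / dz                      # fractional grid index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:
            proj[lo] += p
        else:
            proj[lo] += p * (hi - b)
            proj[hi] += p * (b - lo)
    return proj

def linear_categorical_td_step(W, phi_s, phi_next, r, gamma, atoms, lr):
    """One TD step on a transition (s, r, s').
    W has shape (d, K); the return distribution at a state with features
    phi is softmax(phi @ W) over the fixed atoms."""
    def probs(phi):
        logits = phi @ W
        e = np.exp(logits - logits.max())
        return e / e.sum()
    # Bellman target: shift/scale the next-state distribution, then project
    target = categorical_projection(atoms, r + gamma * atoms, probs(phi_next))
    # cross-entropy gradient for the softmax parameterization
    grad = np.outer(phi_s, probs(phi_s) - target)
    return W - lr * grad
```

Taking the expectation of the learned categorical distribution, `probs(phi) @ atoms`, recovers an ordinary value estimate, which is the sense in which distributional TD subsumes classical TD; the paper's contribution is showing the extra distributional information costs no additional samples (up to constants) when $K$ is large.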