🤖 AI Summary
In distributional reinforcement learning, parameterized approximations of the true return distribution introduce substantial inductive bias, degrading generalization and undermining the reliability of uncertainty estimation. To address this, we propose the Diverse Projection Ensemble (DPE), a framework that combines multiple Wasserstein-distance-compatible projection operators with different parameterized distribution representations. We theoretically characterize how projection bias affects generalization. Furthermore, DPE couples ensemble disagreement, measured via the 1-Wasserstein distance, with an exploration bonus, yielding an uncertainty-aware deep exploration mechanism. Evaluated on the Behaviour Suite and VizDoom benchmarks, DPE significantly outperforms state-of-the-art methods, excelling in particular on directed exploration tasks. These results empirically validate that diversity across projections is critical for robust exploration and reliable uncertainty estimation.
📝 Abstract
In contrast to classical reinforcement learning (RL), distributional RL algorithms aim to learn the distribution of returns rather than their expected value. Since the nature of the return distribution is generally unknown a priori or arbitrarily complex, a common approach finds approximations within a set of representable, parametric distributions. Typically, this involves a projection of the unconstrained distribution onto the set of simplified distributions. We argue that this projection step entails a strong inductive bias when coupled with neural networks and gradient descent, thereby profoundly impacting the generalization behavior of learned models. In order to facilitate reliable uncertainty estimation through diversity, we study the combination of several different projections and representations in a distributional ensemble. We establish theoretical properties of such projection ensembles and derive an algorithm that uses ensemble disagreement, measured by the average 1-Wasserstein distance, as a bonus for deep exploration. We evaluate our algorithm on the Behaviour Suite benchmark and VizDoom and find that diverse projection ensembles lead to significant performance improvements over existing methods on a variety of tasks, with the most pronounced gains in directed exploration problems.
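To make the disagreement bonus concrete, here is a minimal sketch of the average pairwise 1-Wasserstein distance across ensemble members. It assumes each member's return distribution is given as a set of equally weighted quantile values (a common quantile representation in distributional RL); the function names are illustrative, not from the paper's codebase. For equally weighted quantiles, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted quantile values.

```python
def w1_quantiles(q_a, q_b):
    """1-Wasserstein distance between two return distributions, each given
    as a list of equally weighted quantile values of the same length.
    For this representation, W1 is the mean absolute difference of the
    sorted quantile values."""
    qa, qb = sorted(q_a), sorted(q_b)
    return sum(abs(a - b) for a, b in zip(qa, qb)) / len(qa)

def disagreement_bonus(member_quantiles):
    """Average pairwise 1-Wasserstein distance across ensemble members,
    used as an intrinsic bonus for deep exploration. Returns 0.0 when the
    members agree exactly (or when there are fewer than two members)."""
    n = len(member_quantiles)
    dists = [w1_quantiles(member_quantiles[i], member_quantiles[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists) if dists else 0.0
```

In practice this bonus would be added to the environment reward at each state-action pair, so that states where the differently projected ensemble members disagree about the return distribution are visited preferentially.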