Efficient On-Policy Reinforcement Learning via Exploration of Sparse Parameter Space

📅 2025-09-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing policy gradient methods (e.g., PPO, TRPO) perform parameter updates solely along a single stochastic gradient direction, neglecting local geometric structure in parameter space and thus often converging to suboptimal policies. To address this, we propose ExploRLer, a plug-and-play local exploration enhancement framework that, without increasing the number of gradient updates, models the local geometry around policy checkpoints and systematically explores high-return regions within the current update neighborhood. ExploRLer is fully compatible with mainstream on-policy algorithms and requires no modification to existing training pipelines. Empirically, it significantly improves both convergence speed and final performance across multiple challenging continuous-control benchmarks. These results demonstrate that explicitly modeling and leveraging local parameter-space geometry is both effective and essential for optimizing reinforcement learning policies.

πŸ“ Abstract
Policy-gradient methods such as Proximal Policy Optimization (PPO) are typically updated along a single stochastic gradient direction, leaving the rich local structure of the parameter space unexplored. Previous work has shown that the surrogate gradient is often poorly correlated with the true reward landscape. Building on this insight, we visualize the parameter space spanned by policy checkpoints within an iteration and reveal that higher-performing solutions often lie in nearby unexplored regions. To exploit this opportunity, we introduce ExploRLer, a pluggable pipeline that seamlessly integrates with on-policy algorithms such as PPO and TRPO, systematically probing the unexplored neighborhoods of surrogate on-policy gradient updates. Without increasing the number of gradient updates, ExploRLer achieves significant improvements over baselines in complex continuous-control environments. Our results demonstrate that iteration-level exploration provides a practical and effective way to strengthen on-policy reinforcement learning and offer a fresh perspective on the limitations of the surrogate objective.
Problem

Research questions and friction points this paper is trying to address.

Explores sparse parameter space to improve policy-gradient reinforcement learning efficiency
Addresses poor correlation between surrogate gradients and true reward landscapes
Enhances on-policy algorithms by systematically probing unexplored parameter neighborhoods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explores nearby parameter neighborhoods for better solutions
Pluggable pipeline integrates with on-policy algorithms like PPO
Systematically probes unexplored regions without extra gradient updates
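The iteration-level exploration idea above can be illustrated with a minimal sketch: after a gradient update lands the policy at a checkpoint, sample a few nearby parameter vectors and keep the best-scoring one, using no additional gradient computations. This is a simplified random-search stand-in, not the paper's actual geometry-aware procedure; `probe_neighborhood`, `toy_return`, and all parameter names are hypothetical.

```python
import random

def probe_neighborhood(theta, evaluate, radius=0.05, n_probes=8, seed=0):
    """Illustrative sketch (not ExploRLer's exact algorithm): after an
    on-policy update reaches checkpoint `theta`, sample candidate
    parameter vectors within `radius` of it and keep the highest-return
    one. No extra gradient updates are performed."""
    rng = random.Random(seed)
    best_theta, best_score = list(theta), evaluate(theta)
    for _ in range(n_probes):
        # Gradient-free perturbation of every parameter coordinate.
        candidate = [t + rng.uniform(-radius, radius) for t in theta]
        score = evaluate(candidate)
        if score > best_score:
            best_theta, best_score = candidate, score
    return best_theta, best_score

# Toy reward landscape with a peak away from the checkpoint.
def toy_return(theta):
    return -((theta[0] - 0.3) ** 2 + (theta[1] + 0.2) ** 2)

theta0 = [0.0, 0.0]  # checkpoint after a PPO-style update
theta1, score1 = probe_neighborhood(theta0, toy_return, radius=0.2)
print(score1 >= toy_return(theta0))  # probing can only match or improve
```

Because the incumbent checkpoint seeds the search, the probe can only match or improve the evaluated return, which mirrors the paper's claim of gains without extra gradient updates; the real method replaces blind sampling with a model of the local geometry around checkpoints.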
Xinyu Zhang
Department of Computer Science, Stony Brook University
Aishik Deb
Department of Computer Science, Stony Brook University
Klaus Mueller
Professor of Computer Science, Stony Brook University
Visualization · Visual Analytics · Data Science · Explainable AI · Medical Imaging