Exploration by Random Reward Perturbation

📅 2025-06-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient policy diversity and susceptibility to local optima in reinforcement learning, this paper proposes Random Reward Perturbation (RRP): injecting zero-mean noise into environmental rewards to enhance exploration in a lightweight, plug-and-play manner. RRP establishes a theoretical connection between reward shaping and noise-driven exploration; it is additive to existing exploration strategies, applicable across algorithms, and incurs negligible computational overhead. It integrates naturally with algorithms such as PPO and SAC and composes with entropy regularization. Theoretical analysis shows that RRP improves state-action coverage while preserving convergence guarantees. Empirically, RRP significantly boosts sample efficiency, helps escape local optima in both sparse- and dense-reward tasks, and improves training stability and final performance.

📝 Abstract
We introduce Random Reward Perturbation (RRP), a novel exploration strategy for reinforcement learning (RL). Our theoretical analyses demonstrate that adding zero-mean noise to environmental rewards effectively enhances policy diversity during training, thereby expanding the range of exploration. RRP is fully compatible with action-perturbation-based exploration strategies, such as $\epsilon$-greedy, stochastic policies, and entropy regularization, providing additive improvements to exploration effects. It is general, lightweight, and can be integrated into existing RL algorithms with minimal implementation effort and negligible computational overhead. RRP establishes a theoretical connection between reward shaping and noise-driven exploration, highlighting their complementary potential. Experiments show that RRP significantly boosts the performance of Proximal Policy Optimization and Soft Actor-Critic, achieving higher sample efficiency and escaping local optima across various tasks, under both sparse and dense reward scenarios.
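The abstract describes RRP as a lightweight, plug-and-play addition to existing RL pipelines. A minimal sketch of that idea is a Gym-style wrapper that adds zero-mean Gaussian noise to each reward; the class name, the `sigma` parameter, and the optional linear decay schedule below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class RandomRewardPerturbation:
    """Sketch of a Gym-style wrapper adding zero-mean Gaussian noise
    to rewards (the RRP idea). `sigma` and the linear decay schedule
    are illustrative choices, not the paper's specification."""

    def __init__(self, env, sigma=0.1, decay_steps=None, seed=None):
        self.env = env
        self.sigma = sigma
        self.decay_steps = decay_steps  # optional: anneal noise to zero
        self.rng = np.random.default_rng(seed)
        self.t = 0

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        scale = self.sigma
        if self.decay_steps is not None:
            # linearly shrink the perturbation so late training
            # sees the unmodified environmental reward
            scale *= max(0.0, 1.0 - self.t / self.decay_steps)
        self.t += 1
        # zero-mean noise: the expected return is unchanged,
        # while per-step rewards vary to diversify the policy
        noisy_reward = reward + self.rng.normal(0.0, scale)
        return obs, noisy_reward, terminated, truncated, info
```

Because the wrapper only touches the reward channel, any algorithm that consumes `(obs, reward, done, info)` transitions, e.g. PPO or SAC, can train through it without modification.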
Problem

Research questions and friction points this paper is trying to address.

Enhancing policy diversity via reward noise in RL
Combining reward and action perturbation for better exploration
Improving sample efficiency and escaping local optima in RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random Reward Perturbation enhances policy diversity
Compatible with action-perturbation-based exploration strategies
Lightweight integration with existing RL algorithms