Dynamic Learning Rate for Deep Reinforcement Learning: A Bandit Approach

📅 2024-10-16
🏛️ arXiv.org
๐Ÿ“ˆ Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the slow or unstable convergence caused by fixed learning rates in the non-stationary setting of deep reinforcement learning, this paper proposes LRRL, a dynamic learning rate adaptation mechanism based on a multi-armed bandit framework (the summary cites an Upper Confidence Bound, UCB, rule). Rather than relying on hand-crafted decay schedules, which assume steady progress toward convergence, LRRL treats each candidate learning rate as a bandit arm and uses the cumulative returns of the RL policy as bandit feedback to update the arms' selection distribution, so the learning rate is chosen online from the agent's own performance. The approach is algorithm-agnostic and can be combined with standard deep RL algorithms such as DQN and PPO; the empirical results show that it can substantially improve performance on some benchmark tasks.

📝 Abstract
In deep Reinforcement Learning (RL) models trained using gradient-based techniques, the choice of optimizer and its learning rate are crucial to achieving good performance: higher learning rates can prevent the model from learning effectively, while lower ones might slow convergence. Additionally, due to the non-stationarity of the objective function, the best-performing learning rate can change over the training steps. To adapt the learning rate, a standard technique consists of using decay schedulers. However, these schedulers assume that the model is progressively approaching convergence, which may not always be true, leading to delayed or premature adjustments. In this work, we propose dynamic Learning Rate for deep Reinforcement Learning (LRRL), a meta-learning approach that selects the learning rate based on the agent's performance during training. LRRL is based on a multi-armed bandit algorithm, where each arm represents a different learning rate, and the bandit feedback is provided by the cumulative returns of the RL policy to update the arms' probability distribution. Our empirical results demonstrate that LRRL can substantially improve the performance of deep RL algorithms for some tasks.
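The bandit mechanism the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it uses a UCB1 selection rule (the summary mentions UCB, though the abstract's "probability distribution" wording could also suggest an Exp3-style bandit), with each arm a candidate learning rate and the cumulative return of a training window as the arm's reward. The class name, candidate rates, and exploration coefficient are all illustrative choices.

```python
import math

class LearningRateBandit:
    """UCB1 bandit over a discrete set of candidate learning rates.

    Hypothetical sketch of the mechanism described in the abstract:
    each arm is a learning rate; after a training window, the
    cumulative return of the RL policy is fed back as that arm's reward.
    """

    def __init__(self, learning_rates, c=2.0):
        self.lrs = list(learning_rates)
        self.c = c                            # exploration coefficient
        self.counts = [0] * len(self.lrs)     # times each arm was played
        self.values = [0.0] * len(self.lrs)   # running mean reward per arm
        self.t = 0                            # total number of selections

    def select(self):
        """Return the index of the learning rate to use for the next window."""
        self.t += 1
        # Play each arm once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        ucb = [
            self.values[i] + math.sqrt(self.c * math.log(self.t) / self.counts[i])
            for i in range(len(self.lrs))
        ]
        return max(range(len(self.lrs)), key=ucb.__getitem__)

    def update(self, arm, cumulative_return):
        """Incremental mean update with the window's cumulative return."""
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (cumulative_return - self.values[arm]) / n


# Usage: pick a learning rate per training window, train, feed back the return.
bandit = LearningRateBandit([1e-4, 3e-4, 1e-3])
arm = bandit.select()
lr = bandit.lrs[arm]          # pass lr to the optimizer for this window
ret = 10.0                    # stand-in for the policy's cumulative return
bandit.update(arm, ret)
```

In a real training loop, the agent would be trained for a fixed number of steps with the selected rate before `update` is called, and rewards would typically be normalized, since UCB's exploration bonus assumes bounded rewards.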
Problem

Research questions and friction points this paper is trying to address.

Dynamic Learning Rate
Deep Reinforcement Learning
Adaptive Learning Environment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Learning Rate
Deep Reinforcement Learning
Multi-Armed Bandit
Henrique Donancio
Univ. Grenoble Alpes, Inria, CNRS, Grenoble, France
Antoine Barrier
Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, Grenoble, France
Leah F. South
School of Mathematical Sciences and Centre for Data Science, Queensland University of Technology, Brisbane, Australia
Florence Forbes
Director of Research, Inria Grenoble Rhône-Alpes
Statistics, Bayesian image processing, Clustering techniques, Markov random fields, Mixture models