Rethinking the Role of Dynamic Sparse Training for Scalable Deep Reinforcement Learning

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Scaling deep reinforcement learning (DRL) models often degrades performance due to unique optimization challenges, particularly plasticity loss, which dense parameter growth exacerbates. Existing dynamic sparse training methods suffer from three key limitations: (1) treating the encoder, policy, and value networks identically despite their distinct learning dynamics; (2) failing to disentangle the contribution of dynamic sparsification from that of architectural improvements; and (3) lacking systematic comparison across sparsity transition paradigms (sparse-to-sparse, dense-to-sparse, and sparse-to-dense). Method: We propose Module-Specific Training (MST), a framework that tailors dynamic topology-adaptation strategies to individual network modules. Contribution/Results: MST is the first to rigorously demonstrate the complementarity between sparse training and modern DRL architectures. Evaluated across multiple DRL algorithms, MST significantly improves training stability and scalability without altering algorithmic structure, delivering consistent performance gains.

📝 Abstract
Scaling neural networks has driven breakthrough advances in machine learning, yet this paradigm fails in deep reinforcement learning (DRL), where larger models often degrade performance due to unique optimization pathologies such as plasticity loss. While recent works show that dynamically adapting network topology during training can mitigate these issues, existing studies have three critical limitations: (1) applying uniform dynamic training strategies across all modules despite encoder, critic, and actor following distinct learning paradigms, (2) focusing evaluation on basic architectures without clarifying the relative importance and interaction between dynamic training and architectural improvements, and (3) lacking systematic comparison between different dynamic approaches including sparse-to-sparse, dense-to-sparse, and sparse-to-dense. Through comprehensive investigation across modules and architectures, we reveal that dynamic sparse training strategies provide module-specific benefits that complement the primary scalability foundation established by architectural improvements. We finally distill these insights into Module-Specific Training (MST), a practical framework that further exploits the benefits of architectural improvements and demonstrates substantial scalability gains across diverse RL algorithms without algorithmic modifications.
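The abstract's central idea is that the encoder, actor, and critic should each get their own topology-adaptation strategy rather than one uniform scheme. A minimal sketch of such a per-module configuration is below; the module names, sparsity levels, and strategy labels are illustrative assumptions for exposition, not the paper's actual MST settings.

```python
# Hypothetical per-module sparse-training configuration. The specific
# sparsity levels and transition paradigms assigned here are assumed
# for illustration only; the paper's own assignments may differ.
MODULE_SPARSITY = {
    "encoder": {"sparsity": 0.5, "update": "sparse_to_sparse"},
    "actor":   {"sparsity": 0.8, "update": "dense_to_sparse"},
    "critic":  {"sparsity": 0.6, "update": "sparse_to_dense"},
}

def strategy_for(module_name):
    """Look up the topology-adaptation strategy for a network module."""
    return MODULE_SPARSITY[module_name]
```

A training loop would consult `strategy_for` when deciding how each module's connectivity mask evolves, instead of applying one global rule.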
Problem

Research questions and friction points this paper is trying to address.

How can dynamic sparse training be tailored to the distinct learning dynamics of each DRL module (encoder, actor, critic)?
How do dynamic training strategies interact with architectural improvements, and which drives scalability?
How do the sparse-to-sparse, dense-to-sparse, and sparse-to-dense paradigms compare for scalable reinforcement learning?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sparse training adapts network topology during training
Module-Specific Training framework provides customized strategies per module
MST enhances scalability without modifying RL algorithms
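The "adapts network topology during training" bullet refers to the sparse-to-sparse family of methods, where low-magnitude connections are periodically dropped and new ones regrown at constant overall sparsity. A minimal SET-style sketch of one such update is shown below; this is a generic illustration of the technique, not the paper's MST procedure.

```python
import numpy as np

def prune_and_regrow(weights, mask, drop_frac=0.3, rng=None):
    """One sparse-to-sparse topology update (SET-style sketch):
    drop the smallest-magnitude active weights, then regrow the same
    number of connections at random inactive positions, so the overall
    sparsity level stays constant."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)
    if n_drop == 0:
        return mask
    # Drop the weakest currently-active connections.
    magnitudes = np.abs(weights.ravel()[active])
    drop = active[np.argsort(magnitudes)[:n_drop]]
    new_mask = mask.copy()
    new_mask.ravel()[drop] = 0
    # Regrow the same number of connections at random inactive positions.
    inactive = np.flatnonzero(new_mask == 0)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    new_mask.ravel()[grow] = 1
    return new_mask
```

In a module-specific setup, `drop_frac` and the update frequency would be chosen per module rather than shared across the whole network.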