🤖 AI Summary
To address catastrophic forgetting—the fundamental trade-off between plasticity and stability in continual reinforcement learning—this paper proposes the SSDE framework. Methodologically, SSDE integrates structured sparse coding, task-adaptive parameter modulation, and a closed loop of sparse policy-network inference and retraining. Its key contributions are: (1) a novel fine-grained structured sparse parameter co-allocation mechanism that enables efficient network compression while facilitating cross-task knowledge sharing; and (2) a sensitivity-driven dormant neuron reactivation strategy that dynamically balances parameter freezing and unfreezing, thereby enhancing exploratory capability and cross-task transferability. Evaluated on the CW10-v1 benchmark, SSDE achieves a 95% task success rate—substantially outperforming state-of-the-art methods—and shows that high plasticity and strong stability can be optimized jointly.
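To make the co-allocation idea concrete, the following is a minimal sketch of allocating each new task a disjoint set of trainable weights from the unused pool while keeping previously claimed weights frozen for forward transfer. The function name, the 20% capacity budget, and the random selection rule are illustrative assumptions, not SSDE's actual allocation scheme.

```python
import numpy as np

def allocate_masks(weight_shape, used_mask, new_capacity=0.2, rng=None):
    """Hypothetical sketch: carve a trainable mask for a new task out of
    currently unused weights. Weights claimed by earlier tasks stay frozen
    but still participate in the forward pass (forward transfer)."""
    rng = np.random.default_rng(rng)
    n_new = int(new_capacity * used_mask.size)
    free_idx = np.flatnonzero(~used_mask)          # weights no task owns yet
    chosen = rng.choice(free_idx, size=min(n_new, free_idx.size), replace=False)
    trainable_mask = np.zeros(used_mask.size, dtype=bool)
    trainable_mask[chosen] = True
    trainable_mask = trainable_mask.reshape(weight_shape)
    frozen_mask = used_mask.copy()                 # forward-transfer parameters
    return frozen_mask, trainable_mask

# Example: task 1 claims 20% of a 10x10 weight matrix; task 2 then
# claims 20% more from the remaining free weights.
shape = (10, 10)
used = np.zeros(shape, dtype=bool)
frozen1, train1 = allocate_masks(shape, used, rng=0)
used |= train1
frozen2, train2 = allocate_masks(shape, used, rng=1)
```

Because each task only trains its own disjoint slice, earlier tasks' parameters are never overwritten, which is what prevents catastrophic forgetting in structure-based methods.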
📝 Abstract
Continual Reinforcement Learning (CRL) is essential for developing agents that can learn, adapt, and accumulate knowledge over time. However, a fundamental challenge persists: agents must strike a delicate balance between plasticity, which enables rapid skill acquisition, and stability, which ensures long-term knowledge retention while preventing catastrophic forgetting. In this paper, we introduce SSDE, a novel structure-based approach that enhances plasticity through a fine-grained allocation strategy with Structured Sparsity and Dormant-guided Exploration. SSDE decomposes the parameter space into forward-transfer (frozen) parameters and task-specific (trainable) parameters. Crucially, these parameters are allocated by an efficient co-allocation scheme under sparse coding, ensuring sufficient trainable capacity for new tasks while promoting efficient forward transfer through frozen parameters. However, structure-based methods often suffer from rigidity due to the accumulation of non-trainable parameters, limiting exploration and adaptability. To address this, we further introduce a sensitivity-guided neuron reactivation mechanism that systematically identifies and resets dormant neurons, which exhibit minimal influence in the sparse policy network during inference. This approach effectively enhances exploration while preserving structural efficiency. Extensive experiments on the CW10-v1 Continual World benchmark demonstrate that SSDE achieves state-of-the-art performance, reaching a success rate of 95% and significantly surpassing prior methods in the plasticity–stability trade-off (code is available at: https://github.com/chengqiArchy/SSDE).
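The dormant-neuron reactivation step can be sketched as follows: score each neuron by its mean absolute activation relative to the layer average, flag low-scoring neurons as dormant, and re-initialize their incoming weights to restore exploration. The scoring rule, the threshold `tau`, and the Gaussian re-initialization are assumptions in the general spirit of dormant-neuron methods, not SSDE's exact sensitivity criterion.

```python
import numpy as np

def dormant_scores(activations):
    """Normalized per-neuron activity: mean |activation| over a batch,
    divided by the layer's mean activity (assumed scoring rule)."""
    act = np.abs(activations).mean(axis=0)          # shape: (n_neurons,)
    return act / (act.mean() + 1e-8)

def reactivate_dormant(W_in, activations, tau=0.1, rng=None):
    """Reset the incoming weights of neurons whose score falls below tau."""
    rng = np.random.default_rng(rng)
    dormant = dormant_scores(activations) < tau
    W_new = W_in.copy()
    n_dormant = int(dormant.sum())
    if n_dormant:
        # Re-initialize dormant columns (simple Gaussian init as a
        # stand-in for whatever initializer the network originally used).
        W_new[:, dormant] = rng.normal(0.0, 0.1, size=(W_in.shape[0], n_dormant))
    return W_new, dormant

# Example: a 3-neuron layer where the third neuron is effectively silent.
acts = np.array([[1.0, 0.8, 0.001],
                 [0.9, 1.1, 0.000],
                 [1.2, 0.7, 0.002]])
W = np.ones((4, 3))
W_reset, mask = reactivate_dormant(W, acts, rng=0)
```

Only the dormant neuron's incoming weights change; active neurons, including frozen forward-transfer parameters, are left untouched, which is how the mechanism adds exploratory capacity without sacrificing stability.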