🤖 AI Summary
To address the high cost and safety risks of environment interaction in safe online reinforcement learning (RL), as well as the poor quality and insufficient coverage of offline data, this paper proposes the first offline-to-online policy transfer framework tailored for safe RL. The method integrates offline safe-policy initialization, online constrained optimization, Q-function calibration, and policy fine-tuning. Its key innovations are: (1) a value pre-alignment mechanism that mitigates bias in offline Q-function estimation; and (2) an adaptive PID-based Lagrangian multiplier tuner that dynamically adjusts constraint penalty coefficients to resolve Lagrange multiplier mismatch. Evaluated across diverse safety-critical control tasks, the approach achieves substantial improvements—average cumulative reward increases by 23.6% and constraint satisfaction rate by 31.2%—outperforming state-of-the-art baselines while ensuring both sample efficiency and strong safety guarantees.
📝 Abstract
The high costs and risks involved in extensive environment interactions hinder the practical application of current online safe reinforcement learning (RL) methods. While offline safe RL addresses this by learning policies from static datasets, its performance is usually limited by reliance on data quality and by challenges with out-of-distribution (OOD) actions. Inspired by recent successes in offline-to-online (O2O) RL, it is crucial to explore whether offline safe RL can be leveraged to facilitate faster and safer online policy learning, a direction that has yet to be fully investigated. To fill this gap, we first demonstrate that naively applying existing O2O algorithms from standard RL does not work well in the safe RL setting due to two unique challenges: *erroneous Q-estimations*, resulting from the offline-online objective mismatch and offline cost sparsity, and *Lagrangian mismatch*, resulting from difficulties in aligning Lagrange multipliers between offline and online policies. To address these challenges, we introduce **Marvel**, a novel framework for O2O safe RL, comprising two key components that work in concert: *Value Pre-Alignment*, which aligns the Q-functions with the underlying truth before online learning, and *Adaptive PID Control*, which effectively adjusts the Lagrange multipliers during online finetuning. Extensive experiments demonstrate that Marvel significantly outperforms existing baselines in both reward maximization and safety constraint satisfaction. By introducing the first policy-finetuning-based framework for O2O safe RL, compatible with many offline and online safe RL methods, our work has great potential to advance the field towards more efficient and practical safe RL solutions.
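To make the Adaptive PID Control idea concrete, the following is a minimal sketch of a PID-controlled Lagrange multiplier update, the generic mechanism the abstract refers to. The class name, gain values, and cost budget below are illustrative assumptions, not details from the paper: the controller treats the gap between the observed episode cost and the cost limit as an error signal and drives the multiplier up when the constraint is violated and down when it is satisfied.

```python
# Illustrative sketch of a PID-based Lagrange multiplier tuner for safe RL.
# Class name, gains (kp, ki, kd), and cost_limit are hypothetical defaults.

class PIDLagrangian:
    def __init__(self, cost_limit, kp=0.1, ki=0.01, kd=0.05):
        self.cost_limit = cost_limit        # per-episode cost budget d
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0                 # accumulated constraint violation
        self.prev_violation = 0.0
        self.multiplier = 0.0               # Lagrange multiplier lambda >= 0

    def update(self, episode_cost):
        # Violation is positive when the policy exceeds the cost budget.
        violation = episode_cost - self.cost_limit
        # Integral term with a simple anti-windup clamp at zero.
        self.integral = max(0.0, self.integral + violation)
        derivative = violation - self.prev_violation
        self.prev_violation = violation
        # PID output, projected onto [0, inf) so lambda stays a valid multiplier.
        self.multiplier = max(
            0.0,
            self.kp * violation + self.ki * self.integral + self.kd * derivative,
        )
        return self.multiplier
```

In a Lagrangian safe RL loop, the returned multiplier weights the cost critic in the policy loss (roughly, maximize reward minus lambda times cost); the derivative term damps the oscillations that a plain gradient-ascent multiplier update tends to produce.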