🤖 AI Summary
Offline reinforcement learning suffers from poor policy generalization and large model extrapolation errors due to static, fixed datasets. To address these issues, this paper proposes a model-based framework incorporating adversarial data augmentation. Instead of relying on fixed-horizon rollouts, the method employs an ensemble of environment models to alternately sample adversarial trajectories. It further introduces a dynamic model selection mechanism coupled with differential regularization, jointly improving extrapolation robustness and preserving policy conservatism, without requiring manual tuning of the rollout horizon. Evaluated on the D4RL benchmark, the approach consistently outperforms existing model-based offline RL algorithms in both policy effectiveness and sample efficiency.
📝 Abstract
Model-based offline Reinforcement Learning (RL) constructs environment models from offline datasets to perform conservative policy optimization. Existing approaches focus on learning state transitions with ensemble models and rolling out conservative estimates to mitigate extrapolation errors. However, static data make it difficult to develop a robust policy, and offline agents cannot interact with the environment to gather new data. To address these challenges, we introduce Model-based Offline Reinforcement learning with AdversariaL data augmentation (MORAL). MORAL replaces the fixed-horizon rollout with adversarial data augmentation, performing alternating sampling with ensemble models to enrich the training data. Specifically, the adversarial process dynamically selects ensemble models against the policy for biased sampling, mitigating the optimistic estimates of fixed models and thus robustly expanding the training data for policy optimization. Moreover, a differential factor is integrated into the adversarial process for regularization, minimizing extrapolation error. This data-augmented optimization adapts to diverse offline tasks without tuning the rollout horizon, demonstrating broad applicability. Extensive experiments on the D4RL benchmark show that MORAL outperforms other model-based offline RL methods in policy learning and sample efficiency.
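The core idea, adversarially selecting among ensemble models at each rollout step rather than rolling out one fixed model for a fixed horizon, can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: the linear dynamics models, the `value_estimate` heuristic, and the toy policy are all hypothetical stand-ins for MORAL's learned ensemble, critic, and actor.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(n_models=4, state_dim=3, action_dim=2):
    """Hypothetical ensemble of linear dynamics models (A, B), standing in
    for the learned environment models in a model-based offline RL method."""
    return [
        (np.eye(state_dim) + 0.1 * rng.normal(size=(state_dim, state_dim)),
         0.1 * rng.normal(size=(state_dim, action_dim)))
        for _ in range(n_models)
    ]

def value_estimate(state):
    """Hypothetical value function; the real method would use its critic."""
    return -float(np.sum(state ** 2))  # prefers states near the origin

def adversarial_rollout(ensemble, policy, s0, steps=5):
    """Alternating sampling: at each step, the adversary picks the ensemble
    member whose predicted successor is worst (lowest value) for the current
    policy, producing pessimistically biased augmentation data instead of a
    fixed-horizon rollout under a single fixed model."""
    trajectory, s = [], s0
    for _ in range(steps):
        a = policy(s)
        candidates = [A @ s + B @ a for A, B in ensemble]
        # Dynamic model selection: take the lowest-value successor state.
        s_next = min(candidates, key=value_estimate)
        trajectory.append((s, a, s_next))
        s = s_next
    return trajectory

# Toy deterministic policy mapping a 3-d state to a 2-d action.
policy = lambda s: -0.5 * s[:2]
traj = adversarial_rollout(make_ensemble(), policy, np.ones(3))
print(len(traj))  # → 5
```

The sketch omits the paper's differential regularization factor, which would additionally penalize the adversary's selection to keep extrapolation error small; here the selection is purely value-pessimistic.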