Model-Based Offline Reinforcement Learning with Adversarial Data Augmentation

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline reinforcement learning suffers from poor policy generalization and large model extrapolation errors due to static, fixed datasets. To address these issues, this paper proposes a model-based framework incorporating adversarial data augmentation. Instead of relying on fixed-horizon rollouts, the method employs an ensemble of environment models to alternately sample dynamic adversarial trajectories. It further introduces a novel dynamic model selection mechanism coupled with differential regularization, jointly enhancing extrapolation robustness and ensuring policy conservatism—without requiring manual tuning of rollout horizon. Evaluated on the D4RL benchmark, the approach consistently outperforms existing model-based offline RL algorithms, achieving state-of-the-art performance in both policy effectiveness and sample efficiency.

📝 Abstract
Model-based offline Reinforcement Learning (RL) constructs environment models from offline datasets to perform conservative policy optimization. Existing approaches focus on learning state transitions through ensemble models, rolling out conservative estimates to mitigate extrapolation errors. However, the static data makes it challenging to develop a robust policy, and offline agents cannot access the environment to gather new data. To address these challenges, we introduce Model-based Offline Reinforcement learning with AdversariaL data augmentation (MORAL). In MORAL, we replace the fixed-horizon rollout by employing adversarial data augmentation to execute alternating sampling with ensemble models to enrich the training data. Specifically, this adversarial process dynamically selects ensemble models against the policy for biased sampling, mitigating the optimistic estimation of fixed models and thus robustly expanding the training data for policy optimization. Moreover, a differential factor is integrated into the adversarial process for regularization, ensuring error minimization in extrapolations. This data-augmented optimization adapts to diverse offline tasks without rollout-horizon tuning, showing remarkable applicability. Extensive experiments on the D4RL benchmark demonstrate that MORAL outperforms other model-based offline RL methods in terms of policy learning and sample efficiency.
Problem

Research questions and friction points this paper is trying to address.

Mitigates extrapolation errors in offline RL with adversarial data augmentation
Enhances policy robustness by dynamically selecting ensemble models
Improves sample efficiency without requiring rollout horizon tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial data augmentation enriches training data
Dynamic ensemble model selection mitigates optimistic estimation
Differential factor regularization minimizes extrapolation errors
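The adversarial sampling idea described above can be sketched in a few lines. This is a minimal illustration only: the linear toy dynamics, proportional policy, and quadratic reward below are assumptions for demonstration, not MORAL's learned components, and the "worst one-step reward" rule is a crude stand-in for the paper's dynamic ensemble-model selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_model(noise_scale):
    """Toy dynamics model: next_state = A @ state + B * action + noise."""
    A = np.eye(2) * 0.95
    B = np.array([0.1, 0.05])
    def step(state, action):
        return A @ state + B * action + rng.normal(0.0, noise_scale, size=2)
    return step

# Ensemble of toy environment models (stand-ins for learned dynamics).
ensemble = [make_linear_model(s) for s in (0.01, 0.05, 0.1)]

def policy(state):
    """Trivial proportional policy (illustrative only)."""
    return -0.5 * state[0]

def reward(state, action):
    """Quadratic cost turned into a reward (illustrative only)."""
    return -(state @ state) - 0.1 * action ** 2

def adversarial_rollout(start_state, horizon=10):
    """Roll out by picking, at each step, the ensemble member whose
    one-step outcome is worst for the current policy -- adversarial
    selection that counters the optimism of any single fixed model."""
    state = np.asarray(start_state, dtype=float)
    trajectory = []
    for _ in range(horizon):
        action = policy(state)
        # Evaluate each model's candidate next state, keep the worst case.
        candidates = [m(state, action) for m in ensemble]
        scores = [reward(s, policy(s)) for s in candidates]
        state = candidates[int(np.argmin(scores))]
        trajectory.append((state.copy(), action))
    return trajectory

traj = adversarial_rollout([1.0, -0.5], horizon=5)
print(len(traj))  # 5 augmented transitions
```

The augmented transitions would then join the offline dataset for policy optimization; the paper's differential regularization factor, which bounds how far these adversarial rollouts drift, is omitted here for brevity.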
Hongye Cao
Chang'an University
Remote Sensing

Fan Feng
Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China

Jing Huo
Nanjing University
Machine Learning, Computer Vision

Shangdong Yang
Nanjing University of Posts and Telecommunications
Reinforcement Learning, Multi-agent Systems, Multi-armed Bandits

Meng Fang
University of Liverpool
Natural Language Processing, Reinforcement Learning, Agents, Artificial Intelligence

Tianpei Yang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China

Yang Gao
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China