🤖 AI Summary
Offline reinforcement learning (RL) for automated bidding faces two key challenges: real-world datasets cover only a narrow slice of the state space, and simulator-generated data suffers from a simulator-reality mismatch. To address both, the authors propose Model-based Reinforcement Learning for Bidding (MRLB), an offline RL framework that trains policies jointly on real data and data synthesized by a learned, high-fidelity environment model. MRLB models the environment dynamics with a permutation-equivariant neural network, so state representations generalize consistently under reordering of set-structured inputs, and it trains the policy with a pessimism-constrained offline Q-learning algorithm that explicitly penalizes the model's own prediction errors, limiting error propagation and improving policy robustness. On real-world advertising bidding tasks, MRLB substantially outperforms state-of-the-art methods in policy performance, training stability, and cross-scenario generalization.
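The summary does not specify the exact network architecture; a minimal Deep-Sets-style layer is one standard way to get the permutation-equivariance property it describes (all shapes and weight names here are illustrative, not from the paper):

```python
import numpy as np

def perm_equivariant_layer(X, W1, W2, b):
    """Deep-Sets-style layer: per-element transform plus a pooled (mean) term.

    Permuting the rows of X (the elements of a set-structured state) permutes
    the rows of the output identically -- the layer is permutation equivariant.
    """
    pooled = X.mean(axis=0, keepdims=True)   # order-invariant summary of the set
    return X @ W1 + pooled @ W2 + b          # pooled term broadcasts to every row

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                  # 5 set elements, 4 features each
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
b = rng.normal(size=(1, 3))

perm = rng.permutation(5)
out = perm_equivariant_layer(X, W1, W2, b)
out_perm = perm_equivariant_layer(X[perm], W1, W2, b)
assert np.allclose(out[perm], out_perm)      # equivariance holds numerically
```

Because the mean-pooled term is unchanged by any reordering of the rows, permuting the input simply permutes the output rows, which is exactly the symmetry the model architecture is meant to exploit.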
📝 Abstract
Reinforcement learning (RL) for auto-bidding has shifted from using simplistic offline simulators (Simulation-based RL Bidding, SRLB) to offline RL on fixed real datasets (Offline RL Bidding, ORLB). However, ORLB policies are limited by the dataset's state space coverage, offering modest gains. While SRLB expands state coverage, its simulator-reality gap risks misleading policies. This paper introduces Model-based RL Bidding (MRLB), which learns an environment model from real data to bridge this gap. MRLB trains policies using both real and model-generated data, expanding state coverage beyond ORLB. To ensure model reliability, we propose: 1) A permutation equivariant model architecture for better generalization, and 2) A robust offline Q-learning method that pessimistically penalizes model errors. These form the Permutation Equivariant Model-based Offline RL (PE-MORL) algorithm. Real-world experiments show that PE-MORL outperforms state-of-the-art auto-bidding methods.