Permutation Equivariant Model-based Offline Reinforcement Learning for Auto-bidding

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline reinforcement learning (RL) for automated bidding faces two key challenges: limited state coverage in real-world datasets and environmental mismatch in simulator-generated data. To address these, we propose Model-based Reinforcement Learning for Bidding (MRLB), an offline RL framework that jointly trains policies on both real data and data synthesized by a high-fidelity environment model. MRLB introduces a permutation-equivariant neural network to model the environment dynamics, ensuring that state representations generalize symmetrically across input orderings, and a pessimism-constrained offline Q-learning algorithm that explicitly mitigates error propagation from model inaccuracies, improving policy robustness. Evaluated on real-world advertising bidding tasks, MRLB substantially outperforms state-of-the-art methods, with significant improvements in policy performance, training stability, and cross-scenario generalization.

📝 Abstract
Reinforcement learning (RL) for auto-bidding has shifted from using simplistic offline simulators (Simulation-based RL Bidding, SRLB) to offline RL on fixed real datasets (Offline RL Bidding, ORLB). However, ORLB policies are limited by the dataset's state space coverage, offering modest gains. While SRLB expands state coverage, its simulator-reality gap risks misleading policies. This paper introduces Model-based RL Bidding (MRLB), which learns an environment model from real data to bridge this gap. MRLB trains policies using both real and model-generated data, expanding state coverage beyond ORLB. To ensure model reliability, we propose: 1) A permutation equivariant model architecture for better generalization, and 2) A robust offline Q-learning method that pessimistically penalizes model errors. These form the Permutation Equivariant Model-based Offline RL (PE-MORL) algorithm. Real-world experiments show that PE-MORL outperforms state-of-the-art auto-bidding methods.
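The permutation-equivariant architecture described in the abstract can be illustrated with a minimal DeepSets-style layer (a common construction, not necessarily the paper's exact design): each element of a set-structured state is transformed by shared weights plus a permutation-invariant pooling term, so permuting the input rows permutes the output rows identically. All names and shapes below are illustrative assumptions.

```python
# Minimal sketch of a permutation-equivariant layer (DeepSets-style);
# this is an illustrative assumption, not the paper's exact architecture.
import numpy as np

def equivariant_layer(x, w_self, w_pool):
    """x: (n, d) set of n feature vectors (e.g. per-ad-slot features).
    out[i] = x[i] @ w_self + mean(x) @ w_pool, so reordering the rows
    of x reorders the rows of the output in exactly the same way."""
    pooled = x.mean(axis=0, keepdims=True)   # (1, d), permutation-invariant
    return x @ w_self + pooled @ w_pool      # (n, d_out), permutation-equivariant

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(3, 4))

# Equivariance check: permute-then-apply equals apply-then-permute.
perm = rng.permutation(5)
assert np.allclose(equivariant_layer(x[perm], w1, w2),
                   equivariant_layer(x, w1, w2)[perm])
```

Because the pooled term is order-independent, the learned dynamics model cannot overfit to one arbitrary ordering of symmetric state components, which is the generalization property the abstract refers to.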
Problem

Research questions and friction points this paper is trying to address.

Bridges simulator-reality gap in auto-bidding RL
Expands state coverage beyond fixed datasets
Ensures model reliability with equivariant architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based RL bridging simulator-reality gap
Permutation equivariant model for generalization
Pessimistic Q-learning penalizing model errors
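The pessimistic penalization of model errors listed above can be sketched in the style of uncertainty-penalized model-based offline RL (e.g. a MOPO-like reward penalty); this is a hedged illustration under assumed names, not the paper's exact PE-MORL update rule.

```python
# Hedged sketch: penalize the Bellman target by a dynamics-model
# uncertainty estimate, so the policy is pessimistic on synthetic data.
# `lam`, the ensemble-std penalty, and all shapes are assumptions.
import numpy as np

def pessimistic_target(reward, next_q, ensemble_next_states,
                       gamma=0.99, lam=1.0):
    """ensemble_next_states: (k, d) next-state predictions from k
    dynamics models; disagreement (std) proxies model error."""
    uncertainty = ensemble_next_states.std(axis=0).max()
    return reward - lam * uncertainty + gamma * next_q

# No ensemble disagreement -> no penalty; disagreement lowers the target.
certain = pessimistic_target(1.0, 2.0, np.zeros((4, 3)))
uncertain = pessimistic_target(1.0, 2.0, np.array([[0.0, 0.0], [2.0, 0.0]]))
assert uncertain < certain
```

The design intent is that Q-values on model-generated transitions are only trusted to the extent the environment model agrees with itself, which limits error propagation from model inaccuracies into the policy.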
Zhiyu Mou
M.S. student at Tsinghua University
machine learning, network intelligence, reinforcement learning, graph neural network
Miao Xu
Alibaba Group, Beijing, China
Wei Chen
Alibaba Group, Beijing, China
Rongquan Bai
Alibaba Group, Beijing, China
Chuan Yu
Alibaba Group, Beijing, China
Jian Xu
Alibaba Group, Beijing, China