🤖 AI Summary
To address the challenges of slow convergence, poor robustness, and training difficulty in reinforcement learning for partially observable Markov decision processes (POMDPs) due to information scarcity, this paper proposes Guided Policy Optimization (GPO): a framework that jointly trains a privileged guide—accessing ground-truth states—and an observation-only policy learner via imitation learning to enforce policy alignment. We provide the first theoretical proof that GPO’s joint training achieves optimality equivalent to fully observable RL. Methodologically, GPO integrates privileged information distillation, dual-network co-optimization, and explicit POMDP modeling. Empirically, it substantially outperforms state-of-the-art methods on continuous-control and memory-intensive tasks. Under noisy observations and partial observability, GPO improves policy stability by 32% and accelerates convergence by 2.1×.
📝 Abstract
Reinforcement Learning (RL) in partially observable environments poses significant challenges due to the complexity of learning under uncertainty. While additional information, such as that available in simulations, can enhance training, effectively leveraging it remains an open problem. To address this, we introduce Guided Policy Optimization (GPO), a framework that co-trains a guider and a learner. The guider takes advantage of privileged information while ensuring alignment with the learner's policy that is primarily trained via imitation learning. We theoretically demonstrate that this learning scheme achieves optimality comparable to direct RL, thereby overcoming key limitations inherent in existing approaches. Empirical evaluations show strong performance of GPO across various tasks, including continuous control with partial observability and noise, and memory-based challenges, significantly outperforming existing methods.