🤖 AI Summary
Proximal Policy Optimization (PPO) suffers from high variance and high sample complexity, which undermine training stability and efficiency. To address this, we propose HP3O, a novel PPO variant that introduces *trajectory recency modeling* into the PPO framework for the first time. Specifically, HP3O employs a FIFO trajectory replay buffer to reuse recent high-return trajectories and updates the policy with a mixed batch, combining the best-return trajectory with randomly sampled ones under a distributional-shift constraint. We theoretically establish a monotonic policy improvement guarantee. Empirical evaluation on multiple continuous-control benchmark tasks shows that HP3O improves sample efficiency and training stability over standard PPO, A2C, and PPO-RND: it reduces policy-gradient variance by 32% and consistently achieves superior final performance.
📝 Abstract
Proximal policy optimization (PPO) is one of the most popular state-of-the-art on-policy algorithms and has become a standard baseline in modern reinforcement learning, with applications in numerous fields. Though it delivers stable performance with theoretical policy improvement guarantees, high variance and high sample complexity remain critical challenges for on-policy algorithms. To alleviate these issues, we propose Hybrid-Policy Proximal Policy Optimization (HP3O), which utilizes a trajectory replay buffer to make efficient use of trajectories generated by recent policies. In particular, the buffer applies a "first in, first out" (FIFO) strategy so that only recent trajectories are kept, attenuating data distribution drift. A batch consisting of the trajectory with the best return and other randomly sampled trajectories from the buffer is used to update the policy networks. This strategy helps the agent improve on top of its most recent best performance and, in turn, empirically reduces variance. We theoretically establish policy improvement guarantees for the proposed algorithm. HP3O is validated and compared against several baseline algorithms on multiple continuous control environments. Our code is available here.
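The buffer and batch-construction scheme described above can be illustrated with a minimal sketch. The class and function names (`TrajectoryBuffer`, `sample_batch`, `capacity`, `num_random`) are illustrative assumptions, not the authors' actual implementation; the sketch only shows the FIFO eviction and the "best-return plus random" batch described in the abstract.

```python
# Minimal sketch of a FIFO trajectory replay buffer with best-plus-random batching.
# All names here are hypothetical and not taken from the HP3O codebase.
import random
from collections import deque


class TrajectoryBuffer:
    """Keeps only the most recent trajectories (FIFO eviction)."""

    def __init__(self, capacity: int):
        # deque with maxlen drops the oldest trajectory once the buffer is full
        self.buffer = deque(maxlen=capacity)

    def add(self, trajectory, ret: float):
        # store each trajectory together with its episode return
        self.buffer.append((trajectory, ret))

    def sample_batch(self, num_random: int):
        # batch = trajectory with the best return + randomly sampled recent ones
        best_traj, _ = max(self.buffer, key=lambda item: item[1])
        sampled = random.sample(list(self.buffer), k=min(num_random, len(self.buffer)))
        return [best_traj] + [traj for traj, _ in sampled]


# Usage sketch: push each rollout with its return, then build an update batch.
# buffer = TrajectoryBuffer(capacity=16)
# buffer.add(trajectory, ret=episode_return)
# batch = buffer.sample_batch(num_random=4)
```

Because the buffer holds only trajectories from recent policies, the sampled batch stays close to the current policy's data distribution, which is what motivates the FIFO choice in the abstract.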