Enhancing PPO with Trajectory-Aware Hybrid Policies

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Proximal Policy Optimization (PPO) suffers from high variance and high sample complexity, undermining training stability and efficiency. To address this, we propose HP3O—a novel PPO variant that introduces *trajectory recency modeling* into the PPO framework for the first time. Specifically, HP3O employs a FIFO trajectory replay buffer to reuse recent high-return trajectories and designs a mixed policy update mechanism under distributional shift constraints, jointly leveraging optimal and randomly sampled trajectories. We theoretically establish its monotonic policy improvement guarantee. Empirical evaluation on multiple continuous-control benchmark tasks demonstrates that HP3O significantly improves sample efficiency and training stability over standard PPO, A2C, and PPO-RND: it reduces policy gradient variance by 32% and consistently achieves superior final performance.

📝 Abstract
Proximal policy optimization (PPO) is one of the most popular state-of-the-art on-policy algorithms and has become a standard baseline in modern reinforcement learning, with applications in numerous fields. Though it delivers stable performance with theoretical policy improvement guarantees, high variance and high sample complexity remain critical challenges for on-policy algorithms. To alleviate these issues, we propose Hybrid-Policy Proximal Policy Optimization (HP3O), which utilizes a trajectory replay buffer to make efficient use of trajectories generated by recent policies. In particular, the buffer applies the "first in, first out" (FIFO) strategy so as to keep only recent trajectories and attenuate data distribution drift. A batch consisting of the trajectory with the best return, together with other trajectories randomly sampled from the buffer, is used to update the policy networks. This strategy helps the agent improve on top of its most recent best performance and, in turn, empirically reduces variance. We theoretically establish policy improvement guarantees for the proposed algorithm. HP3O is validated and compared against several baseline algorithms on multiple continuous control environments. Our code is available here.
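The buffer mechanism described in the abstract (FIFO eviction, with each update batch built from the best-return trajectory plus randomly sampled ones) can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' released code; the class and method names are hypothetical.

```python
import random
from collections import deque

class TrajectoryBuffer:
    """Sketch of HP3O's FIFO trajectory replay buffer (names are illustrative)."""

    def __init__(self, capacity):
        # deque with maxlen gives FIFO eviction: the oldest trajectory
        # is dropped once capacity is exceeded, limiting distribution drift.
        self.buffer = deque(maxlen=capacity)

    def add(self, trajectory, ret):
        # Store the trajectory together with its episodic return.
        self.buffer.append((ret, trajectory))

    def sample_batch(self, k):
        # Batch = the single best-return trajectory in the buffer
        # plus k other trajectories sampled uniformly at random.
        best = max(self.buffer, key=lambda item: item[0])
        others = [item for item in self.buffer if item is not best]
        sampled = random.sample(others, min(k, len(others)))
        return [best[1]] + [traj for _, traj in sampled]

# Usage: fill past capacity, then draw a mixed batch for the policy update.
buf = TrajectoryBuffer(capacity=5)
for i in range(7):
    buf.add([f"step_data_{i}"], ret=float(i))
batch = buf.sample_batch(2)  # best trajectory first, then 2 random ones
```

With capacity 5 and 7 insertions, the two oldest trajectories are evicted, so the retained best-return trajectory is always drawn from the recent window — which is the point of combining FIFO recency with best-trajectory reuse.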
Problem

Research questions and friction points this paper is trying to address.

Reduces high variance in PPO
Decreases sample complexity
Mitigates data distribution drift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory replay buffer
FIFO strategy
Hybrid-Policy PPO
Qisai Liu
Iowa State University
Reinforcement Learning · Machine Learning
Zhanhong Jiang
Scientist at TrAC
Distributed optimization · Machine Learning
Hsin-Jung Yang
Department of Mechanical Engineering, Iowa State University, Ames, 50011, Iowa, United States
Mahsa Khosravi
Iowa State University
Reinforcement Learning · Robotics · Computer vision
Joshua R. Waite
Postdoctoral Research Associate, Translational AI Center, Iowa State University
Machine Learning · Deep Learning
Soumik Sarkar
Department of Mechanical Engineering, Iowa State University, Ames, 50011, Iowa, United States; Department of Computer Science, Iowa State University, Ames, 50011, Iowa, United States; Translational AI Center, Iowa State University, Ames, 50011, Iowa, United States