Policy Gradient with Second Order Momentum

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the low sample efficiency, high variance, and unstable convergence of first-order policy-optimization methods in reinforcement learning. It proposes PG-SOM, a lightweight second-order optimization algorithm whose core contribution is the integration of an unbiased, positive-definite diagonal Hessian approximation into the REINFORCE update, combined with exponential moving averaging of first-order gradients to precondition the gradient; the preconditioned direction is proven to be a descent direction in expectation. PG-SOM incurs only O(D) memory overhead (where D is the parameter dimension) yet effectively incorporates curvature information. On standard control benchmarks, PG-SOM improves sample efficiency by up to 2.1× over first-order baselines, substantially reduces policy-evaluation variance, and outperforms Fisher-based methods and other state-of-the-art approaches.

📝 Abstract
We develop Policy Gradient with Second-Order Momentum (PG-SOM), a lightweight second-order optimisation scheme for reinforcement-learning policies. PG-SOM augments the classical REINFORCE update with two exponentially weighted statistics: a first-order gradient average and a diagonal approximation of the Hessian. By preconditioning the gradient with this curvature estimate, the method adaptively rescales each parameter, yielding faster and more stable ascent of the expected return. We provide a concise derivation, establish that the diagonal Hessian estimator is unbiased and positive-definite under mild regularity assumptions, and prove that the resulting update is a descent direction in expectation. Numerical experiments on standard control benchmarks show up to a 2.1× increase in sample efficiency and a substantial reduction in variance compared to first-order and Fisher-matrix baselines. These results indicate that even coarse second-order information can deliver significant practical gains while incurring only O(D) memory overhead for a D-parameter policy. All code and reproducibility scripts will be made publicly available.
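The update described in the abstract can be sketched as follows. This is a minimal illustration of an EMA-preconditioned ascent step, not the paper's actual implementation: the function name `pg_som_step`, the hyperparameters `beta1`, `beta2`, `lr`, `eps`, and the use of an absolute value to keep the curvature estimate positive are all assumptions for the sketch.

```python
import numpy as np

def pg_som_step(theta, grad, diag_hess, state,
                lr=0.1, beta1=0.9, beta2=0.99, eps=1e-8):
    # Hypothetical PG-SOM-style step: maintain exponential moving averages
    # of the gradient (m) and of a positive diagonal curvature estimate (h),
    # then take a preconditioned ascent step theta <- theta + lr * m / (h + eps).
    m, h = state
    m = beta1 * m + (1 - beta1) * grad
    h = beta2 * h + (1 - beta2) * np.abs(diag_hess)  # keep the estimate positive
    theta = theta + lr * m / (h + eps)
    return theta, (m, h)

# Toy check on an analytic objective J(theta) = -0.5 * sum(a * theta**2),
# whose maximum is at theta = 0; the coordinates have very different curvature.
a = np.array([1.0, 10.0])           # ill-conditioned diagonal curvature
theta = np.array([3.0, 3.0])
state = (np.zeros(2), np.ones(2))   # (gradient EMA, curvature EMA)
for _ in range(200):
    grad = -a * theta               # exact gradient of J
    diag_hess = -a                  # exact diagonal Hessian of J
    theta, state = pg_som_step(theta, grad, diag_hess, state)
```

In a real policy-gradient setting, `grad` and `diag_hess` would be replaced by the REINFORCE gradient estimate and the paper's diagonal Hessian estimator computed from sampled trajectories.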
Problem

Research questions and friction points this paper is trying to address.

Develops PG-SOM for faster reinforcement-learning policy optimization
Uses second-order momentum to stabilize and accelerate gradient ascent
Improves sample efficiency and reduces variance in control benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight second-order optimization for reinforcement learning
Diagonal Hessian approximation for adaptive gradient rescaling
Unbiased positive-definite curvature estimator for stable ascent
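The three ingredients above admit a plausible formalization, assuming the two statistics follow the usual momentum recursions; the symbols $\beta_1$, $\beta_2$, $\alpha$, and $\varepsilon$ are illustrative and not taken from the paper:

```latex
\begin{align*}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,\hat{g}_t,
  & \hat{g}_t &= \textstyle\sum_k \nabla_\theta \log \pi_\theta(a_k \mid s_k)\, R_t,\\
h_t &= \beta_2 h_{t-1} + (1-\beta_2)\,\bigl|\operatorname{diag}\hat{H}_t\bigr|,
  & \theta_{t+1} &= \theta_t + \alpha\, \frac{m_t}{h_t + \varepsilon},
\end{align*}
```

where $\hat{g}_t$ is the REINFORCE gradient estimate and $\hat{H}_t$ the paper's unbiased diagonal Hessian estimate; the positivity of $h_t$ is what makes the preconditioned direction an ascent direction in expectation.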