GIPO: Gaussian Importance Sampling Policy Optimization

📅 2026-03-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the data inefficiency commonly encountered in reinforcement learning during post-training phases, where interaction data are scarce and quickly become outdated. The authors propose a policy optimization objective based on truncated importance sampling that incorporates a log-ratio Gaussian trust weight to softly suppress extreme importance ratios while preserving non-zero gradients. By replacing hard truncation with an adjustable implicit update constraint, the method balances stability and robustness under limited sample budgets. Theoretical analysis grounded in concentration inequalities demonstrates an improved bias-variance trade-off, and empirical evaluations across varying replay buffer sizes consistently show enhanced training stability and sample efficiency.

๐Ÿ“ Abstract
Post-training with reinforcement learning (RL) has recently shown strong promise for advancing multimodal agents beyond supervised imitation. However, RL remains limited by poor data efficiency, particularly in settings where interaction data are scarce and quickly become outdated. To address this challenge, GIPO (Gaussian Importance sampling Policy Optimization) is proposed as a policy optimization objective based on truncated importance sampling, replacing hard clipping with a log-ratio-based Gaussian trust weight to softly damp extreme importance ratios while maintaining non-zero gradients. Theoretical analysis shows that GIPO introduces an implicit, tunable constraint on the update magnitude, while concentration bounds guarantee robustness and stability under finite-sample estimation. Experimental results show that GIPO achieves state-of-the-art performance among clipping-based baselines across a wide range of replay buffer sizes, from near on-policy to highly stale data, while exhibiting a superior bias-variance trade-off, high training stability, and improved sample efficiency.
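The core idea in the abstract can be illustrated with a minimal numerical sketch. The paper does not spell out the exact weight here, so the Gaussian-on-log-ratio form, the `sigma` parameter, and the function names below are illustrative assumptions, not the authors' implementation; the sketch only contrasts hard PPO-style clipping (zero gradient outside the clip range) with a smooth trust weight that decays but never reaches exactly zero:

```python
import numpy as np

def ppo_clip_weight(ratio, eps=0.2):
    # Hard truncation: the effective weight is frozen outside
    # [1 - eps, 1 + eps], so the gradient there is exactly zero.
    return np.clip(ratio, 1.0 - eps, 1.0 + eps)

def gaussian_trust_weight(ratio, sigma=0.5):
    # Assumed GIPO-style weight: the importance ratio is damped by a
    # Gaussian on its log, so it equals the ratio near 1 (on-policy)
    # and shrinks smoothly, but stays strictly positive, for stale data.
    log_r = np.log(ratio)
    return ratio * np.exp(-0.5 * (log_r / sigma) ** 2)

# Near on-policy (ratio ~ 1) both weights pass the ratio through;
# for extreme ratios the Gaussian weight decays smoothly instead of
# saturating at a hard boundary.
for r in (1.0, 1.5, 5.0):
    print(r, ppo_clip_weight(r), gaussian_trust_weight(r))
```

In a surrogate objective, this weight would multiply the advantage estimate in place of the clipped ratio; `sigma` then plays the role of the tunable implicit update constraint mentioned in the abstract (smaller `sigma` suppresses off-policy samples more aggressively).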
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
data efficiency
policy optimization
off-policy learning
sample efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Importance Sampling
Policy Optimization
Truncated Importance Sampling
Reinforcement Learning
Sample Efficiency
🔎 Similar Papers
No similar papers found.
Chengxuan Lu
Wolf 1069B, Sany Group, Hangzhou, China
Zhenquan Zhang
Wolf 1069B, Sany Group, Hangzhou, China
Shukuan Wang
Wolf 1069B, Sany Group, Hangzhou, China
Qunzhi Lin
Wolf 1069B, Sany Group, Hangzhou, China
Baigui Sun
Wolf 1069 b Lab, Sany Group
Artificial Intelligence, Computer Vision
Yang Liu
Zhejiang University
Multimedia, Data Mining