GTPO: Trajectory-Based Policy Optimization in Large Language Models

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
GRPO suffers from two critical flaws in language model alignment: (i) high-frequency, structurally critical tokens that appear in completions with opposite rewards receive conflicting gradient updates and are erroneously suppressed, degrading syntactic and logical integrity; and (ii) negative rewards penalize high-confidence responses, flattening the output distribution and degrading the policy. To address these issues, we propose GTPO, a reference-free, trajectory-level policy optimization method. GTPO dynamically identifies these conflict tokens and protects them by skipping their negative gradient updates while amplifying positive ones, and it filters out completions whose entropy exceeds a provable threshold to prevent policy collapse. Crucially, GTPO eliminates both KL regularization and the reference model. Empirically, it achieves significant improvements over GRPO on GSM8K, MATH, and AIME 2024, with enhanced training stability, superior final performance, and effective mitigation of distribution flattening.
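Read literally, the conflict-token rule amounts to a per-token gradient weight computed over a group of sampled completions. The sketch below is a minimal illustration of that idea, not the paper's code: the tensor layout, the `boost` factor, and the function name `gtpo_token_weights` are all assumptions.

```python
import torch

def gtpo_token_weights(token_ids, advantages, boost=2.0):
    # token_ids:  LongTensor [G, T], G sampled completions of length T (padded)
    # advantages: FloatTensor [G], group-relative advantage per completion
    G, T = token_ids.shape
    weights = torch.ones(G, T)
    pos = advantages > 0  # completions with positive reward signal
    neg = advantages < 0  # completions with negative reward signal
    for t in range(T):
        col = token_ids[:, t]
        # A token is a "conflict token" at position t if it occurs in both
        # a positively and a negatively rewarded completion.
        conflict = set(col[pos].tolist()) & set(col[neg].tolist())
        for g in range(G):
            if col[g].item() in conflict:
                # Skip the negative gradient on conflict tokens,
                # amplify the positive one.
                weights[g, t] = boost if pos[g] else 0.0
    return weights
```

These weights would then multiply the per-token policy-gradient terms; tokens that appear only under one reward sign keep weight 1 and are updated as usual.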

📝 Abstract
Policy-based optimization methods are widely adopted today for the training and alignment of language models, and one of the most recent and effective approaches is Group-relative Policy Optimization (GRPO). In this paper, we reveal and analyze two major limitations of GRPO: (i) tokens frequently appear in completions with both positive and negative rewards, leading to conflicting gradient updates that can reduce their output probability even though they can be essential for maintaining proper structure; (ii) negatively rewarded completions may penalize confident responses and shift model decisions toward unlikely tokens, progressively flattening the output distribution and degrading learning. To address these issues and provide a more stable and effective policy optimization strategy, we introduce GTPO (Group-relative Trajectory-based Policy Optimization), which identifies conflict tokens (tokens appearing at the same position across completions with opposite rewards) and protects them by skipping negative updates while amplifying positive ones. To further prevent policy collapse, GTPO filters out completions whose entropy exceeds a provable threshold. Unlike GRPO, GTPO does not rely on KL-divergence regularization, eliminating the need for a reference model during training, while still ensuring greater training stability and improved performance, as validated through multiple experiments on the GSM8K, MATH, and AIME 2024 benchmarks.
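The entropy filter described in the abstract can likewise be sketched as a keep-mask over completions. This is a minimal sketch assuming the threshold is applied to each completion's mean per-token entropy; the paper derives a provable threshold, which is taken here as a plain input `tau`, and `filter_by_entropy` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def filter_by_entropy(logits, completion_mask, tau):
    # logits:          [G, T, V] policy logits for G sampled completions
    # completion_mask: [G, T]    1.0 on generated tokens, 0.0 on padding
    # tau:             scalar entropy threshold (the paper derives a
    #                  provable value; here it is simply passed in)
    log_p = F.log_softmax(logits, dim=-1)
    token_entropy = -(log_p.exp() * log_p).sum(dim=-1)          # [G, T]
    mean_entropy = (token_entropy * completion_mask).sum(-1) / \
                   completion_mask.sum(-1).clamp(min=1.0)       # [G]
    # Keep only completions whose entropy stays below the threshold.
    return mean_entropy <= tau
```

Completions masked out here would simply be dropped from the batch before computing the policy update, which is how the filter prevents high-entropy samples from flattening the distribution further.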
Problem

Research questions and friction points this paper is trying to address.

Conflicting gradient updates for tokens that receive mixed rewards
Negative rewards that penalize confident responses and flatten the output distribution
Dependence on KL-divergence regularization and a reference model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Protects conflict tokens from negative updates
Filters high-entropy completions to prevent collapse
Eliminates need for reference model in training
🔎 Similar Papers
No similar papers found.
Marco Simoni
Institute of Informatics and Telematics, National Research Council of Italy, Via G. Moruzzi 1, 56124 Pisa, Italy; National Doctorate on Artificial Intelligence, Sapienza Università di Roma, Piazza Aldo Moro 5, 00185 Roma, Italy
Aleksandar Fontana
Institute of Informatics and Telematics, National Research Council of Italy, Via G. Moruzzi 1, 56124 Pisa, Italy; Department of Excellence in Robotics and AI, TeCIP, Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56127 Pisa, Italy
Giulio Rossolini
Scuola Superiore Sant'Anna
Trustworthy AI, Safe and Secure AI, Computer Vision, LLMs
Andrea Saracino
Associate Professor at Scuola Superiore Sant'Anna
Mobile Security, Network Security, Distributed Systems, Trust