Revisiting Entropy in Reinforcement Learning for Large Reasoning Models

📅 2025-11-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In reinforcement learning (RL) with large language models (LLMs), premature convergence to suboptimal policies—termed *entropy collapse*—degrades reasoning capability, response diversity, and probability calibration. This work identifies *positive-advantage tokens* as the primary driver of entropy collapse and demonstrates that off-policy update frequency, data diversity, and optimization clipping thresholds critically govern entropy dynamics. To address this, we propose an *advantage-aware weighted loss function* that explicitly controls entropy by differentially scaling gradient weights for positive- versus negative-advantage tokens. Our method integrates off-policy updates with a verifiable reward mechanism. Evaluated across multiple benchmarks—including mathematical reasoning and code generation—it significantly mitigates entropy collapse, improves response diversity by +23.6%, and boosts task performance by an average of +5.8%. The approach yields a reproducible, plug-and-play RL training paradigm for LLMs.

Technology Category

Application Category

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a predominant approach for enhancing the reasoning capabilities of large language models (LLMs). However, the entropy of LLMs usually collapses during RLVR training, causing premature convergence to suboptimal local minima and hinder further performance improvement. Although various approaches have been proposed to mitigate entropy collapse, a comprehensive study of entropy in RLVR remains lacking. To address this gap, we conduct extensive experiments to investigate the entropy dynamics of LLMs trained with RLVR and analyze how model entropy correlates with response diversity, calibration, and performance across various benchmarks. Our findings reveal that the number of off-policy updates, the diversity of training data, and the clipping thresholds in the optimization objective are critical factors influencing the entropy of LLMs trained with RLVR. Moreover, we theoretically and empirically demonstrate that tokens with positive advantages are the primary contributors to entropy collapse, and that model entropy can be effectively regulated by adjusting the relative loss weights of tokens with positive and negative advantages during training.
Problem

Research questions and friction points this paper is trying to address.

Entropy collapse in large language models during reinforcement learning training
Premature convergence to suboptimal local minima hindering performance improvement
Lack of comprehensive understanding of entropy dynamics in RLVR training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes entropy dynamics in reinforcement learning with verifiable rewards
Identifies critical factors influencing model entropy during training
Regulates entropy by adjusting loss weights for tokens with advantages
🔎 Similar Papers
No similar papers found.
Renren Jin
Renren Jin
College of Intelligence and Computing, Tianjin University
Natural Language Processing
Pengzhi Gao
Pengzhi Gao
Xiaomi LLM Team
Machine LearningNatural Language ProcessingHigh Dimensional DataSignal Processing
Y
Yuqi Ren
School of Computer Science and Technology, Tianjin University
Z
Zhuowen Han
School of Computer Science and Technology, Tianjin University
T
Tongxuan Zhang
College of Computer and Information Engineering, Tianjin Normal University
W
Wuwei Huang
Unaffiliated
W
Wei Liu
Unaffiliated
Jian Luan
Jian Luan
Toshiba, Microsoft, Xiaomi
LLMVLMTTSSinging Synthesis
Deyi Xiong
Deyi Xiong
Professor, College of Intelligence and Computing, Tianjin University, China
Natural Language ProcessingLarge Language ModelsAI4Science