🤖 AI Summary
Self-taught reasoning suffers from performance degradation due to an imbalance between exploration (response diversity) and exploitation (reward discriminability) during iterative self-improvement. Method: We propose the first iterative optimization framework capable of dynamically monitoring and regulating this balance. We first quantify the synchronous decay of both exploration and exploitation over the course of self-improvement; then, leveraging real-time policy quality and reward confidence, we adaptively adjust the sampling temperature/Top-k and candidate-filtering weights, and introduce multi-round self-distillation. Contribution/Results: Integrating mathematical modeling with dynamic optimization, our method significantly outperforms baselines, including STaR, on mathematical reasoning, code generation, and commonsense reasoning tasks, while sustaining both high response diversity and high reward-discrimination accuracy throughout training.
📝 Abstract
In the absence of extensive human-annotated data for complex reasoning tasks, self-improvement -- where models are trained on their own outputs -- has emerged as a primary method for enhancing performance. However, the critical factors underlying the mechanism of these iterative self-improving methods remain poorly understood, such as the conditions under which self-improvement is effective and the bottlenecks in the current iterations. In this work, we identify and propose methods to monitor two pivotal factors in this iterative process: (1) the model's ability to generate sufficiently diverse responses (exploration); and (2) the effectiveness of external rewards in distinguishing high-quality candidates from lower-quality ones (exploitation). Using mathematical reasoning as a case study, we begin with a quantitative analysis to track the dynamics of exploration and exploitation, discovering that a model's exploratory capabilities rapidly deteriorate over iterations and that the effectiveness of exploiting external rewards diminishes as well. Motivated by these findings, we introduce B-STaR, a Self-Taught Reasoning framework that autonomously adjusts configurations across iterations to Balance exploration and exploitation, thereby optimizing the self-improving effectiveness based on the current policy model and available rewards. Our experiments on mathematical reasoning, coding, and commonsense reasoning demonstrate that B-STaR not only enhances the model's exploratory capabilities throughout training but also achieves a more effective balance between exploration and exploitation, leading to superior performance.
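The monitor-and-adjust loop described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the diversity metric (fraction of distinct sampled responses), the discriminability metric (reward gap across a fixed threshold), and the temperature-adjustment rule with its targets and step sizes are all hypothetical stand-ins for whatever B-STaR actually computes.

```python
from dataclasses import dataclass

@dataclass
class BalanceConfig:
    """Sampling configuration adjusted between self-improvement iterations."""
    temperature: float = 1.0
    min_temp: float = 0.5
    max_temp: float = 1.5
    step: float = 0.1

def diversity(responses):
    """Exploration proxy: fraction of distinct responses among sampled candidates."""
    return len(set(responses)) / len(responses)

def discriminability(rewards, threshold=0.5):
    """Exploitation proxy: gap between mean rewards above and below a threshold.
    Returns 0.0 when all rewards fall on one side (reward cannot separate candidates)."""
    high = [r for r in rewards if r >= threshold]
    low = [r for r in rewards if r < threshold]
    if not high or not low:
        return 0.0
    return sum(high) / len(high) - sum(low) / len(low)

def adjust(cfg, responses, rewards, div_target=0.5, disc_target=0.2):
    """Raise temperature when diversity is too low (encourage exploration);
    lower it when the reward signal is too weak to rank candidates (favor exploitation)."""
    if diversity(responses) < div_target:
        cfg.temperature = min(cfg.max_temp, cfg.temperature + cfg.step)
    elif discriminability(rewards) < disc_target:
        cfg.temperature = max(cfg.min_temp, cfg.temperature - cfg.step)
    return cfg
```

In each self-improvement round, one would sample candidates with the current configuration, score them with the reward model, call `adjust`, then filter and train on the surviving candidates before the next round.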