🤖 AI Summary
Existing methods for enhancing LLM reasoning over-rely on human- or strong-model-provided intermediate step annotations, leading to homogeneous reasoning paths and suppressing diverse, non-human-like reasoning. To address this, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO): the first method to dynamically identify the most uncertain node in a reasoning chain using the model's own confidence estimates, and to inject self-generated, non-human-like reasoning guidance precisely at that node, thereby avoiding path deviation. CGPO integrates confidence-aware evaluation, self-constructed reasoning paths, and progressive preference optimization, eliminating the need for human or strong-model annotations. Experiments on code and mathematical reasoning tasks demonstrate that, under identical training budgets, CGPO trained solely on data generated by small models surpasses both strong-model-annotated and human-annotated baselines, achieving significant gains in reasoning accuracy and out-of-distribution generalization.
📝 Abstract
Current approaches for strengthening LLM reasoning tend to introduce a training bias toward human-like reasoning trajectories. In step-wise preference optimization in particular, dependence on human or higher-capacity model annotations for intermediate steps limits exploration of alternative, non-human-like reasoning paths and thus constrains achievable performance. Furthermore, in a small-scale pilot study, we observed that in approximately 75% of cases the model's first erroneous step occurs after its lowest-confidence point. This suggests that guiding the model at its lowest-confidence point, before an error occurs, provides more accurate supervision than locating the first explicit error. In this paper, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO), a method that leverages a confidence signal to identify the point of maximal uncertainty in the model's reasoning process and applies self-generated, non-human-like reasoning-path guidance there to mitigate trajectory drift. Our experiments span diverse models on both code and mathematical reasoning tasks. The results show that, with the same amount of training data, our method using data generated by a small model achieves better performance in most cases than approaches using data generated by a strong model or annotated by humans.
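The core selection step described above can be illustrated with a minimal sketch. The abstract does not specify the confidence estimator, so this sketch assumes a common proxy: each reasoning step's confidence is the mean log-probability of its tokens, and guidance is injected at the step with the lowest confidence. The function names and the log-prob-based scoring are illustrative assumptions, not the paper's exact formulation.

```python
def step_confidence(token_logprobs):
    """Assumed proxy confidence: mean token log-probability of one reasoning step."""
    return sum(token_logprobs) / len(token_logprobs)

def most_uncertain_step(steps_token_logprobs):
    """Index of the step with the lowest confidence -- the candidate
    injection point for self-generated reasoning-path guidance."""
    confidences = [step_confidence(lp) for lp in steps_token_logprobs]
    return min(range(len(confidences)), key=confidences.__getitem__)

# Hypothetical chain: per-token log-probs for three reasoning steps.
chain = [
    [-0.1, -0.2, -0.1],  # step 0: high confidence
    [-1.5, -2.0, -1.8],  # step 1: low confidence -> guide the model here
    [-0.3, -0.4, -0.2],  # step 2
]
```

Under the pilot-study observation that the first error typically occurs *after* the lowest-confidence point, intervening at `most_uncertain_step(chain)` supervises the model before it goes wrong, rather than after.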