Enhancing LLM Reasoning via Non-Human-Like Reasoning Path Preference Optimization

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for enhancing LLM reasoning over-rely on intermediate step annotations provided by humans or stronger models, leading to homogeneous reasoning paths and suppressing diverse, non-human-like reasoning. To address this, the paper proposes Confidence-Guided Reasoning Path Preference Optimization (CGPO): a method that uses the model's own confidence estimates to dynamically identify the most uncertain node in a reasoning chain and inject self-generated, non-human-like reasoning guidance precisely at that node, thereby avoiding path deviation. CGPO integrates confidence-aware evaluation, self-constructed reasoning paths, and progressive preference optimization, eliminating the need for human or strong-model annotations. Experiments on code and mathematical reasoning tasks show that, with the same amount of training data, CGPO trained solely on data generated by small models matches or surpasses strong-model- and human-annotated baselines in most cases, with gains in reasoning accuracy and out-of-distribution generalization.

📝 Abstract
Current approaches for strengthening LLM reasoning tend to introduce a training bias toward human-like reasoning trajectories. In step-wise preference optimization, in particular, dependence on human or higher-capacity model annotations for intermediate steps limits exploration of alternative, non-human-like reasoning paths and thus constrains achievable performance. Furthermore, through a small-scale pilot study, we observed that in approximately 75% of cases, the model's first erroneous step occurs after its lowest-confidence point. This suggests that guiding the model at its lowest-confidence point before an error provides more accurate supervision than locating the first explicit error. In this paper, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO), a method that leverages a confidence signal to identify points of maximal uncertainty in the model's reasoning process and applies self-generated, non-human-like reasoning-path guidance to mitigate trajectory drift. Our experiments span diverse models applied to both code and mathematical reasoning tasks. The results show that, with the same amount of training data, our method using data generated by a small model outperforms, in most cases, approaches that use data generated by a strong model or human-annotated data.
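The abstract's core mechanism, locating the step of maximal uncertainty from the model's own confidence, can be sketched with per-step mean token log-probabilities. This is a hypothetical illustration, not the paper's implementation: the step segmentation, the `chain` structure, and the use of mean log-prob as the confidence score are all assumptions; the helper names are invented for the example.

```python
def step_confidence(token_logprobs):
    """Confidence of one reasoning step as its mean token log-probability
    (higher, i.e. closer to 0, means more confident)."""
    return sum(token_logprobs) / len(token_logprobs)

def lowest_confidence_step(chain):
    """Return the index of the least-confident step in a reasoning chain.

    `chain` is a list of steps, each a list of per-token log-probabilities
    (most LLM APIs can return these when log-probs are requested).
    Guidance would then be injected at the returned step, per the paper's
    observation that the first error usually occurs after this point.
    """
    scores = [step_confidence(step) for step in chain]
    return min(range(len(scores)), key=scores.__getitem__)

# Synthetic 4-step chain; step 2 is noticeably less confident.
chain = [
    [-0.1, -0.2, -0.1],
    [-0.3, -0.2],
    [-1.5, -2.0, -1.8],  # uncertain step -> candidate anchor for guidance
    [-0.4, -0.3],
]
print(lowest_confidence_step(chain))  # -> 2
```

In a real pipeline the log-probabilities would come from the generating model itself, so the signal is self-supervised and needs no human or strong-model annotation.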
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM reasoning using non-human-like reasoning paths
Identifying model uncertainty points for targeted guidance
Improving reasoning performance with self-generated training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses confidence signals to identify uncertainty points
Applies self-generated non-human-like reasoning guidance
Achieves better performance with small model data
Junjie Lu
Cystic Fibrosis Foundation
Yuliang Liu
Shanghai Innovation Institute
Chaofeng Qu
Southeast University
Wei Shen
Independent Researcher
Zhouhan Lin
Shanghai Jiao Tong University
Min Xu
University of Technology Sydney