VeriThinker: Learning to Verify Makes Reasoning Models Efficient

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) often generate redundant reasoning steps during chain-of-thought (CoT) inference, leading to high computational cost and low efficiency. To address this, we propose a verification-driven reasoning compression paradigm: by supervising the model on an auxiliary verification task alone, without any synthetic concise-CoT data, we guide it to autonomously suppress unnecessary self-reflection and to transfer zero-shot to speculative reasoning. Our method integrates chain-of-verification supervised fine-tuning, dynamic inference termination, and zero-shot generalization mechanisms. Experiments demonstrate significant improvements: on MATH500, inference tokens decrease by 44% while accuracy increases by 0.8%; on AIME25, tokens decrease by 28% and accuracy rises by 2.1%. These results confirm substantial gains in both inference efficiency and accuracy. To our knowledge, this is the first work to achieve lightweight, general-purpose CoT compression driven solely by verification capability.
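As a rough illustration of what verification-task supervision alone could look like, the sketch below fine-tunes a causal LM to judge candidate CoT solutions as correct or incorrect, with the loss masked to the verdict tokens. This is a minimal sketch under stated assumptions: the prompt template, data format, and training loop are hypothetical, and only the DeepSeek-R1-Distill-Qwen-7B model name comes from the abstract.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # model evaluated in the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# Assumed prompt template for the auxiliary verification task.
VERIFY_PROMPT = (
    "Problem:\n{problem}\n\n"
    "Candidate solution:\n{solution}\n\n"
    "Is the candidate solution correct? Answer 'correct' or 'incorrect'.\nAnswer:"
)

class VerificationDataset(Dataset):
    """(problem, candidate CoT, correctness label) triples; labels can be
    obtained by checking each candidate's final answer against ground truth."""

    def __init__(self, examples):
        self.examples = examples  # list of dicts: problem, solution, is_correct

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        prompt = VERIFY_PROMPT.format(problem=ex["problem"], solution=ex["solution"])
        verdict = " correct" if ex["is_correct"] else " incorrect"
        prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
        verdict_ids = tokenizer(verdict, add_special_tokens=False).input_ids
        input_ids = prompt_ids + verdict_ids
        # Cross-entropy is applied only to the verdict tokens (-100 masks the prompt).
        labels = [-100] * len(prompt_ids) + verdict_ids
        return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}

def collate(batch):
    pad_id = tokenizer.pad_token_id or tokenizer.eos_token_id
    max_len = max(len(b["input_ids"]) for b in batch)

    def pad(seq, value):
        return torch.cat([seq, torch.full((max_len - len(seq),), value, dtype=seq.dtype)])

    return {
        "input_ids": torch.stack([pad(b["input_ids"], pad_id) for b in batch]),
        "labels": torch.stack([pad(b["labels"], -100) for b in batch]),
        "attention_mask": torch.stack(
            [pad(torch.ones_like(b["input_ids"]), 0) for b in batch]
        ),
    }

def train(examples, epochs=1, lr=1e-5):
    loader = DataLoader(VerificationDataset(examples), batch_size=2,
                        shuffle=True, collate_fn=collate)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss  # standard causal-LM loss on the verdict tokens
            loss.backward()
            optim.step()
            optim.zero_grad()
```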

📝 Abstract
Large Reasoning Models (LRMs) excel at complex tasks using Chain-of-Thought (CoT) reasoning. However, their tendency to overthink leads to unnecessarily lengthy reasoning chains, dramatically increasing inference costs. To mitigate this issue, we introduce VeriThinker, a novel approach for CoT compression. Unlike conventional methods that fine-tune LRMs directly on the original reasoning task using synthetic concise CoT data, we innovatively fine-tune the model solely through an auxiliary verification task. By training LRMs to accurately verify the correctness of CoT solutions, the LRMs inherently become more discerning about the necessity of subsequent self-reflection steps, thereby effectively suppressing overthinking. Extensive experiments validate that VeriThinker substantially reduces reasoning chain lengths while maintaining or even slightly improving accuracy. When applied to DeepSeek-R1-Distill-Qwen-7B, our approach reduces reasoning tokens on MATH500 from 3790 to 2125 while improving accuracy by 0.8% (94.0% to 94.8%), and on AIME25, tokens decrease from 14321 to 10287 with a 2.1% accuracy gain (38.7% to 40.8%). Additionally, our experiments demonstrate that VeriThinker can also be zero-shot generalized to speculative reasoning. Code is available at https://github.com/czg1225/VeriThinker
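The zero-shot generalization to speculative reasoning suggests a simple inference-time pattern: a small model drafts a concise solution, and the verification-tuned LRM falls back to full long-CoT reasoning only when it judges the draft incorrect. The sketch below shows one hypothetical way to wire this up; the draft model choice, prompt template, and solve_speculatively helper are illustrative assumptions, not the paper's interface.

```python
from transformers import pipeline

# Hypothetical model choices: a small drafter plus the verification-tuned LRM.
draft_lm = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")
reasoner = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

# Assumed verification prompt (same style as the training sketch above).
VERIFY_PROMPT = (
    "Problem:\n{problem}\n\nCandidate solution:\n{solution}\n\n"
    "Is the candidate solution correct? Answer 'correct' or 'incorrect'.\nAnswer:"
)

def solve_speculatively(problem: str) -> str:
    # 1. Cheap draft: a short solution from the small model.
    draft = draft_lm(problem, max_new_tokens=512,
                     return_full_text=False)[0]["generated_text"]
    # 2. Verification: the verification-tuned LRM judges the draft.
    verdict = reasoner(VERIFY_PROMPT.format(problem=problem, solution=draft),
                       max_new_tokens=4, return_full_text=False)[0]["generated_text"]
    if "incorrect" not in verdict.lower():
        return draft  # accept the draft and skip the expensive long CoT
    # 3. Fallback: run the full long-CoT reasoner only when the draft fails.
    return reasoner(problem, max_new_tokens=8192,
                    return_full_text=False)[0]["generated_text"]
```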
Problem

Research questions and friction points this paper is trying to address.

Overthinking in Large Reasoning Models yields unnecessarily lengthy chain-of-thought reasoning
Long reasoning chains dramatically increase inference cost
Existing compression methods depend on fine-tuning with synthetic concise-CoT data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes LRMs solely on an auxiliary CoT-verification task, with no synthetic concise-CoT data
Suppresses unnecessary self-reflection, substantially shortening reasoning chains
Maintains or slightly improves accuracy and generalizes zero-shot to speculative reasoning
🔎 Similar Papers
No similar papers found.