QFFT, Question-Free Fine-Tuning for Adaptive Reasoning

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Long Chain-of-Thought (Long CoT) models improve performance on complex tasks but suffer from redundant reasoning, i.e., "overthinking", on simple problems, leading to inefficiency. This work proposes Question-Free Fine-Tuning (QFFT), an adaptive reasoning approach that removes the input question during training and learns exclusively from Long CoT responses. The resulting model employs both reasoning patterns adaptively: it prioritizes concise Short CoT patterns and activates Long CoT patterns only when necessary. Experiments on mathematical reasoning benchmarks demonstrate a reduction in average response length of more than 50% while matching standard Supervised Fine-Tuning (SFT) in accuracy. Moreover, QFFT outperforms SFT in noisy, out-of-domain, and low-resource settings, highlighting its robustness and practicality.

📝 Abstract
Recent advancements in Long Chain-of-Thought (CoT) reasoning models have improved performance on complex tasks, but they suffer from overthinking, which generates redundant reasoning steps, especially for simple questions. This paper revisits the reasoning patterns of Long and Short CoT models, observing that the Short CoT patterns offer concise reasoning efficiently, while the Long CoT patterns excel in challenging scenarios where the Short CoT patterns struggle. To enable models to leverage both patterns, we propose Question-Free Fine-Tuning (QFFT), a fine-tuning approach that removes the input question during training and learns exclusively from Long CoT responses. This approach enables the model to adaptively employ both reasoning patterns: it prioritizes the Short CoT patterns and activates the Long CoT patterns only when necessary. Experiments on various mathematical datasets demonstrate that QFFT reduces average response length by more than 50%, while achieving performance comparable to Supervised Fine-Tuning (SFT). Additionally, QFFT exhibits superior performance compared to SFT in noisy, out-of-domain, and low-resource scenarios.
Problem

Research questions and friction points this paper is trying to address.

Redundant reasoning steps ("overthinking") in Long CoT models, especially on simple questions
How to balance concise Short CoT and thorough Long CoT patterns adaptively
Degraded performance of fine-tuned models in noisy, out-of-domain, and low-resource scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Question-Free Fine-Tuning (QFFT) approach
Removes the input question during training
Learns exclusively from Long CoT responses
Adaptively activates Long CoT patterns only when necessary
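
The core data construction can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual code: function names and the prompt format are assumptions. Standard SFT trains the model on the question concatenated with the response, whereas QFFT drops the question and supervises the model on the Long CoT response alone.

```python
# Sketch of QFFT vs. SFT training-example construction.
# Assumption: the "Question:/Answer:" template and function names are
# illustrative placeholders, not the paper's actual data format.

def build_sft_example(question: str, long_cot_response: str) -> str:
    """Standard SFT: the input question is part of the training text."""
    return f"Question: {question}\nAnswer: {long_cot_response}"

def build_qfft_example(question: str, long_cot_response: str) -> str:
    """QFFT: the question is removed; only the Long CoT response is kept."""
    del question  # deliberately unused: training sees no question
    return long_cot_response

question = "What is 2 + 3?"
response = "Let me think step by step. 2 + 3 = 5. The answer is 5."

sft_text = build_sft_example(question, response)
qfft_text = build_qfft_example(question, response)
```

Because the question never appears in the training input, the model cannot tie its reasoning depth to question patterns; it instead learns when to escalate from Short to Long CoT from the responses themselves.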