🤖 AI Summary
Current large language models (LLMs) rely heavily on long chain-of-thought (CoT) reasoning to improve inference performance, yet this paradigm incurs substantial computational overhead and lacks rigorous empirical validation.
Method: The authors challenge this assumption with the "shorter chains are better" hypothesis and propose short-m@k, a parallel early-stopping inference framework that launches k independent reasoning chains in parallel, stops once the first m finish, and takes a majority vote among those m answers. They further introduce a short-chain-oriented supervised fine-tuning strategy.
Contribution/Results: Experiments provide the first systematic evidence that shorter CoT chains outperform longer ones in both accuracy and efficiency. Compared to baselines, short-1@k reduces thinking tokens by up to 40% while maintaining or improving accuracy; short-3@k achieves higher accuracy across all compute budgets and reduces wall-clock latency by up to 33%. These results demonstrate that brevity in reasoning chains can yield superior trade-offs between computational cost and performance.
📄 Abstract
Reasoning large language models (LLMs) heavily rely on scaling test-time compute to perform complex reasoning tasks by generating extensive "thinking" chains. While demonstrating impressive results, this approach incurs significant computational costs and inference time. In this work, we challenge the assumption that longer thinking chains result in better reasoning capabilities. We first demonstrate that shorter reasoning chains within individual questions are significantly more likely to yield correct answers, up to 34.5% more accurate than the longest chain sampled for the same question. Based on these results, we propose short-m@k, a novel reasoning LLM inference method. Our method executes k independent generations in parallel and halts computation once the first m thinking processes are done. The final answer is chosen by majority voting among these m chains. Basic short-1@k demonstrates similar or even superior performance to standard majority voting in low-compute settings, using up to 40% fewer thinking tokens. short-3@k, while slightly less efficient than short-1@k, consistently surpasses majority voting across all compute budgets while still being substantially faster (up to 33% wall-time reduction). Inspired by our results, we finetune an LLM using short, long, and randomly selected reasoning chains, and observe that training on the shorter ones leads to better performance. Our findings suggest rethinking current methods of test-time compute in reasoning LLMs, emphasizing that longer "thinking" does not necessarily translate to improved performance and can, counter-intuitively, lead to degraded results.
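The short-m@k procedure described above (k parallel chains, early stop after the first m finish, majority vote among those m) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: `sample_chain` is a hypothetical stand-in for one model generation, and threads with sleeps stand in for parallel decoding where shorter chains finish sooner.

```python
import random
import time
from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed

def sample_chain(question, seed):
    # Hypothetical stand-in for one LLM reasoning chain: returns
    # (answer, num_thinking_tokens). In a real system this would
    # stream tokens from the model until the chain terminates.
    rng = random.Random(seed)
    tokens = rng.randint(50, 500)         # chain length varies per sample
    time.sleep(tokens / 10000)            # shorter chains finish sooner
    answer = rng.choice(["A", "A", "B"])  # toy answer distribution
    return answer, tokens

def short_m_at_k(question, m=3, k=8):
    """Launch k chains in parallel, keep the first m to finish,
    and majority-vote among those m answers."""
    finished = []
    with ThreadPoolExecutor(max_workers=k) as pool:
        futures = [pool.submit(sample_chain, question, s) for s in range(k)]
        for fut in as_completed(futures):
            finished.append(fut.result())
            if len(finished) == m:
                # Early stop: in a real serving stack the remaining
                # (longer) generations would be aborted here, saving
                # the tokens they would otherwise produce.
                break
    votes = Counter(ans for ans, _ in finished)
    return votes.most_common(1)[0][0]
```

With m=1 this degenerates to short-1@k (take the first chain to finish); larger m trades some of the speedup for the robustness of a majority vote, matching the short-3@k variant in the abstract.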