Steering LLM Thinking with Budget Guidance

šŸ“… 2025-06-16
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
To balance performance and efficiency in large language models (LLMs) under stringent reasoning-budget constraints, this paper proposes budget guidance, a fine-tuning-free mechanism for steering reasoning length. A lightweight predictor models the remaining thinking length token by token with a Gamma distribution, enabling soft, adaptive termination rather than hard truncation, and supports difficulty-aware budget allocation. Under tight budgets on the MATH-500 benchmark, the approach achieves up to a 26% accuracy gain over baseline methods while matching full-thinking-model performance using only 63% of the thinking tokens. The framework also exhibits emergent capabilities, such as estimating question difficulty, without explicit supervision or architectural modification. Overall, the work unifies probabilistic stopping criteria with input-adaptive resource allocation while requiring no changes to the underlying LLM.

šŸ“ Abstract
Recent deep-thinking large language models often reason extensively to improve performance, but such lengthy reasoning is not always desirable, as it incurs excessive inference costs with disproportionate performance gains. Controlling reasoning length without sacrificing performance is therefore important, but remains challenging, especially under tight thinking budgets. We propose budget guidance, a simple yet effective method for steering the reasoning process of LLMs toward a target budget without requiring any LLM fine-tuning. Our approach introduces a lightweight predictor that models a Gamma distribution over the remaining thinking length during next-token generation. This signal is then used to guide generation in a soft, token-level manner, ensuring that the overall reasoning trace adheres to the specified thinking budget. Budget guidance enables natural control of the thinking length, along with significant token efficiency improvements over baseline methods on challenging math benchmarks. For instance, it achieves up to a 26% accuracy gain on the MATH-500 benchmark under tight budgets compared to baseline methods, while maintaining competitive accuracy with only 63% of the thinking tokens used by the full-thinking model. Budget guidance also generalizes to broader task domains and exhibits emergent capabilities, such as estimating question difficulty. The source code is available at: https://github.com/UMass-Embodied-AGI/BudgetGuidance.
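The abstract describes a predictor that models remaining thinking length with a Gamma distribution and uses it to guide generation softly at the token level. Below is a minimal, hypothetical sketch of how such a signal could bias an end-of-thinking token; the function names, the logit-boost form, and the `strength` knob are illustrative assumptions, not the paper's actual formulation (see the linked repository for that):

```python
import math

def gamma_cdf(x, shape, rate):
    """P(L <= x) for L ~ Gamma(shape, rate), via the standard series
    expansion of the regularized lower incomplete gamma (stdlib only)."""
    if x <= 0.0:
        return 0.0
    t = rate * x
    term = 1.0 / shape
    total = term
    for n in range(1, 1000):
        term *= t / (shape + n)
        total += term
        if term < 1e-12 * total:
            break
    return total * math.exp(shape * math.log(t) - t - math.lgamma(shape))

def guided_stop_logit(stop_logit, budget_left, shape, rate, strength=5.0):
    """Softly boost the end-of-thinking token's logit by the predicted
    probability that the remaining thinking length would overrun the
    remaining budget. `shape`/`rate` stand in for the outputs of the
    paper's lightweight predictor at the current token (assumption)."""
    p_overrun = 1.0 - gamma_cdf(budget_left, shape, rate)
    return stop_logit + strength * p_overrun
```

As the remaining budget shrinks, `p_overrun` rises and the stop token is favored more strongly, which gives the soft, token-level termination the abstract describes rather than a hard cutoff.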
Problem

Research questions and friction points this paper is trying to address.

Control reasoning length without performance loss
Steer LLM thinking under tight budget constraints
Improve token efficiency on challenging math benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight predictor models a Gamma distribution over remaining thinking length
Soft token-level guidance keeps reasoning within the target budget
Maintains accuracy with far fewer thinking tokens
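The emergent difficulty estimation mentioned in the summary falls out of the same predictor: its predicted thinking-length distribution already orders questions by how much reasoning they are expected to need. A hedged sketch, where using the Gamma mean as the score is an illustrative assumption rather than the paper's stated metric:

```python
def estimated_difficulty(shape, rate):
    """Expected thinking length under Gamma(shape, rate), used here as a
    proxy difficulty score: questions predicted to need longer reasoning
    are treated as harder. (Illustrative assumption, not the paper's metric.)"""
    return shape / rate

# Hypothetical predictor outputs for two questions:
easy = estimated_difficulty(shape=64.0, rate=0.5)    # ~128 expected tokens
hard = estimated_difficulty(shape=512.0, rate=0.25)  # ~2048 expected tokens
```

Ranking questions by this score would yield a free difficulty estimate without any extra supervision, consistent with the emergent behavior the abstract reports.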