🤖 AI Summary
Large reasoning models (LRMs) suffer from low inference efficiency due to excessively long chains of thought, and existing compression methods, largely built on the "overthinking" assumption, often degrade reasoning quality. To address this, the authors propose A*-Thought, an efficient tree-search-based compression framework: it formalizes reasoning as a cost-weighted search tree and couples A* heuristic search with a bidirectional importance estimation mechanism (combining forward confidence and backward influence) to identify critical reasoning spans and extract high-information-density, low-cost paths. Evaluated on multiple mathematical benchmarks, A*-Thought improves the performance of QwQ-32B by up to 2.39× under low computational budgets and reduces output length by nearly 50% under high budgets. It also generalizes across several other LRMs, jointly improving efficiency and reasoning quality.
📝 Abstract
Large Reasoning Models (LRMs) achieve superior performance by extending the length of their thought process. However, lengthy thinking trajectories reduce efficiency. Most existing methods are constrained by the overthinking assumption and attempt to reason efficiently by compressing the Chain-of-Thought, which often leads to performance degradation. To address this problem, we introduce A*-Thought, an efficient tree search-based unified framework designed to identify and isolate the most essential thoughts from the extensive reasoning chains produced by these models. It formulates the reasoning process of LRMs as a search tree, where each node represents a reasoning span in the vast reasoning space. By combining the A* search algorithm with a cost function specific to the reasoning path, it can efficiently compress the chain of thought and determine a reasoning path with high information density and low cost. In addition, we propose a bidirectional importance estimation mechanism, which further refines this search process and enhances its efficiency beyond uniform sampling. Extensive experiments on several advanced math tasks show that A*-Thought effectively balances performance and efficiency over a huge search space. Specifically, A*-Thought can improve the performance of QwQ-32B by 2.39$\times$ with a low budget and reduce output length by nearly 50% with a high budget. The proposed method is also compatible with several other LRMs, demonstrating its generalization capability. The code can be accessed at: https://github.com/AI9Stars/AStar-Thought.
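To make the abstract's core idea concrete, the sketch below shows what A* search over a tree of reasoning spans could look like. This is a minimal illustration, not the paper's implementation: the function names (`a_star_compress`, `span_cost`, `importance`) and the specific priority formula are assumptions, with `importance` standing in for the paper's bidirectional importance estimation as a heuristic that favors high-information-density spans.

```python
import heapq

def a_star_compress(root, neighbors, span_cost, importance, is_goal):
    """Illustrative A*-style search over a tree of reasoning spans.

    span_cost(node) charges a span for its length (e.g., token count);
    importance(node) is a heuristic estimate of a span's value, a
    hypothetical stand-in for bidirectional importance estimation.
    Returns the first goal-reaching path with the lowest net cost.
    """
    # Frontier entries: (priority, tiebreak counter, node, path so far).
    frontier = [(0.0, 0, root, [root])]
    counter = 1
    best_g = {root: 0.0}  # cheapest known cost to reach each span
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path  # compressed, low-cost reasoning path
        for child in neighbors(node):
            g = best_g[node] + span_cost(child)
            if g < best_g.get(child, float("inf")):
                best_g[child] = g
                # Subtract importance so informative spans rank earlier.
                f = g - importance(child)
                heapq.heappush(frontier, (f, counter, child, path + [child]))
                counter += 1
    return None  # no path to a goal span
```

On a toy tree where one branch is much cheaper in tokens, the search returns the short, low-cost path to the answer span while the expensive branch is never expanded.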