Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) frequently deviate into erroneous reasoning paths during complex reasoning tasks, as mainstream approaches—such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT)—rely solely on single-pass forward inference and lack mechanisms for backtracking or error correction. To address this, we propose Forest-of-Thought (FoT), a novel framework that concurrently constructs multiple reasoning trees. FoT introduces a sparse path activation mechanism to reduce computational overhead, a dynamic self-correction module that detects and rectifies erroneous nodes in real time during inference, and a consensus-guided aggregation strategy to enhance decision robustness. FoT is the first method to adopt a “multi-tree collaboration” architecture, enabling joint optimization of accuracy and test-time compute efficiency. Extensive evaluation on logical and mathematical reasoning benchmarks demonstrates that FoT significantly outperforms CoT and ToT, achieving superior accuracy, robustness, and computational efficiency.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable abilities across various language tasks, but solving complex reasoning problems remains a significant challenge. While existing methods, such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT), enhance reasoning by decomposing problems or structuring prompts, they typically perform a single pass of reasoning and may fail to revisit flawed paths, compromising accuracy. To address this limitation, we propose a novel reasoning framework called Forest-of-Thought (FoT), which integrates multiple reasoning trees to leverage collective decision-making for solving complex logical problems. FoT employs sparse activation strategies to select the most relevant reasoning paths, improving both efficiency and accuracy. Additionally, we introduce a dynamic self-correction strategy that enables real-time error correction, along with consensus-guided decision-making strategies to optimize both correctness and computational resources. Experimental results demonstrate that the FoT framework, combined with these strategies, significantly enhances the reasoning capabilities of LLMs, enabling them to solve complex tasks with greater precision and efficiency.
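The core "forest" idea, running several independent reasoning trees and aggregating their answers by consensus, can be illustrated with a toy sketch. This is not the paper's implementation: a real tree would be an LLM-driven search (as in ToT), while the stub below simulates noisy solvers on an arithmetic task; all names (`reason_tree`, `forest_of_thought`) and the 80% accuracy figure are invented for illustration.

```python
import random
from collections import Counter

def reason_tree(question, seed):
    """Stand-in for one reasoning tree: a real system would expand an
    LLM-driven search tree and return its best leaf answer. Here we
    simulate solvers that reach the correct answer most of the time."""
    rng = random.Random(seed)
    correct = sum(question)  # toy task: sum a list of numbers
    # Simulated 80% per-tree accuracy; failures drift off by one.
    return correct if rng.random() < 0.8 else correct + rng.choice([-1, 1])

def forest_of_thought(question, n_trees=7):
    """Run several trees independently and take the majority answer."""
    answers = [reason_tree(question, seed) for seed in range(n_trees)]
    consensus, votes = Counter(answers).most_common(1)[0]
    return consensus, votes

answer, votes = forest_of_thought([2, 3, 5])
print(answer, votes)
```

Even with individually fallible trees, majority voting makes the aggregate answer far more robust than any single pass, which is the intuition behind the consensus-guided strategy.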
Problem

Research questions and friction points this paper is trying to address.

Single-pass methods (CoT, ToT) cannot revisit flawed reasoning paths
Errors made mid-inference go undetected and uncorrected
Exploring many reasoning paths naively wastes test-time compute
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiple reasoning trees with collective decision-making
Sparse activation of the most relevant reasoning paths
Dynamic real-time self-correction
Consensus-guided decision strategies
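The dynamic self-correction strategy, detecting and rectifying erroneous nodes during inference, can also be sketched in miniature. Again this is a conceptual stand-in, not the paper's method: the "proposer" and "verifier" below are toy functions over an arithmetic chain (the real framework scores LLM-generated reasoning steps), and the injected fault is purely for demonstration.

```python
def propose_step(state, op):
    """Stand-in for an LLM proposing the next reasoning node.
    A deliberate error is injected at state 6 to exercise correction."""
    value = op(state)
    return value + 1 if state == 6 else value  # faulty node at state 6

def verify(state, op, proposed):
    """Cheap checker: recompute the step and compare. A real system
    might use a scorer model or rule-based validator instead."""
    return op(state) == proposed

def reason_with_self_correction(start, ops):
    state = start
    for op in ops:
        step = propose_step(state, op)
        if not verify(state, op, step):  # erroneous node detected...
            step = op(state)             # ...rectified before moving on
        state = step
    return state

# Toy chain: ((2 + 1) * 2 + 1) * 2 = 14, despite the injected fault.
ops = [lambda x: x + 1, lambda x: x * 2, lambda x: x + 1, lambda x: x * 2]
print(reason_with_self_correction(2, ops))
```

The key property is that correction happens per node during inference, so one bad step does not poison the rest of the chain, in contrast to single-pass CoT where the error would propagate to the final answer.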