🤖 AI Summary
This work proposes a node-adaptive routing framework that addresses the trade-off between computational cost and performance in existing graph-based reasoning methods (e.g., GoT, AGoT), which often suffer from high computational overhead and inconsistent gains. The framework uniquely integrates node-level heterogeneity modeling with an explicit global budget constraint, dynamically allocating either lightweight or strong language models based on predicted task difficulty: strong models handle the planning and synthesis stages, while lightweight models handle intermediate subtasks. It introduces a graph-structure-aware dynamic routing mechanism coupled with a global scheduler operating under a strict token budget. Evaluated across multiple reasoning and question-answering benchmarks, the method achieves an average accuracy gain of 8.1 percentage points while reducing output tokens by 79.1%, substantially outperforming current approaches.
📝 Abstract
Large Language Models (LLMs) excel at multi-step reasoning, yet increasing the structural complexity of inference does not consistently improve system-level returns. Methods such as Tree of Thoughts (ToT), Graph of Thoughts (GoT), and Adaptive Graph of Thoughts (AGoT) can boost accuracy on some benchmarks, but often introduce substantial overhead in token consumption and latency, and their gains can be unstable across task distributions, sometimes underperforming simpler Chain-of-Thought (CoT) or direct input-output (IO) prompting. We attribute this inefficiency to stage-wise and node-wise heterogeneity inside GoT-style reasoning pipelines: high-quality planning and final synthesis are globally coupled and typically benefit from strong models, whereas many intermediate subtasks are localized and can be solved accurately by lighter models with far fewer tokens. Motivated by these observations, we propose RouteGoT, a budget-controllable, node-adaptive routing framework for graph-structured reasoning. RouteGoT performs in-graph routing by prioritizing strong models for planning and synthesis, while dynamically allocating lightweight models and cost-effective strategies to leaf subtasks based on predicted difficulty. It further integrates explicit budget constraints into a global inference scheduler that controls graph expansion under a user-specified token budget, enabling predictable performance-cost trade-offs. Experiments across reasoning, retrieval, and multi-hop QA benchmarks show that RouteGoT matches or improves accuracy while substantially reducing token usage; specifically, it achieves an average accuracy improvement of 8.1 percentage points and a 79.1% reduction in output tokens compared to AGoT. Furthermore, RouteGoT outperforms existing routing baselines by maintaining a superior cost-accuracy trade-off, demonstrating improved robustness across varying budget targets and tasks.