🤖 AI Summary
In information retrieval, large language models (LLMs) using chain-of-thought (CoT) prompting often suffer from "over-reasoning": they generate excessively long, semantically redundant reasoning paths that incur substantial computational overhead for minimal gain. To address this, the authors propose State Machine Reasoning (SMR), a transition-based framework built from discrete actions (Refine, Rerank, Stop) that explicitly models reasoning as a controlled state-transition process, enabling fine-grained trajectory control and dynamic early stopping. The approach requires no task-specific fine-tuning and generalizes zero-shot across diverse LLMs and retrievers. On the BEIR and BRIGHT benchmarks, it improves nDCG@10 by 3.4% while reducing token consumption by 74.4%, mitigating the two core challenges the authors identify in LLM-based retrieval: redundant reasoning trajectories and intent drift.
📝 Abstract
Chain-of-Thought (CoT) prompting enables complex reasoning in large language models (LLMs), including applications in information retrieval (IR). However, it often leads to overthinking, where models produce excessively long and semantically redundant traces with little or no benefit. We identify two key challenges in IR: redundant trajectories that revisit similar states and misguided reasoning that diverges from user intent. To address these, we propose State Machine Reasoning (SMR), a transition-based reasoning framework composed of discrete actions (Refine, Rerank, Stop) that support early stopping and fine-grained control. Experiments on the BEIR and BRIGHT benchmarks show that SMR improves retrieval performance (nDCG@10) by 3.4% while reducing token usage by 74.4%. It generalizes across LLMs and retrievers without requiring task-specific tuning, offering a practical alternative to conventional CoT reasoning. The code and details are available at https://github.com/ldilab/SMR.
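The full implementation is in the linked repository; purely as an illustration of the transition-based loop the abstract describes, here is a minimal Python sketch. All names (`State`, `choose_action`, `smr_loop`) and the toy policy are hypothetical: in SMR the action at each step is chosen by the LLM, and Refine/Rerank would call a query rewriter and a retriever rather than the stubs below.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REFINE = "refine"   # rewrite the query
    RERANK = "rerank"   # reorder the retrieved documents
    STOP = "stop"       # terminate reasoning early

@dataclass
class State:
    query: str
    docs: list[str]  # current ranked list of document ids

def choose_action(state: State, step: int, max_steps: int) -> Action:
    # Stub policy for illustration only; SMR delegates this decision
    # to the LLM conditioned on the current (query, results) state.
    if step >= max_steps:
        return Action.STOP
    return Action.REFINE if step % 2 == 0 else Action.RERANK

def refine(state: State) -> State:
    # Placeholder for an LLM-driven query rewrite.
    return State(query=state.query + " (refined)", docs=state.docs)

def rerank(state: State) -> State:
    # Placeholder for an LLM-driven reordering of the ranked list.
    return State(query=state.query, docs=list(reversed(state.docs)))

def smr_loop(query: str, docs: list[str], max_steps: int = 3) -> State:
    """Run the discrete-action loop until STOP or the step budget."""
    state = State(query, docs)
    for step in range(max_steps + 1):
        action = choose_action(state, step, max_steps)
        if action is Action.STOP:
            break  # early stopping: no further tokens spent
        state = refine(state) if action is Action.REFINE else rerank(state)
    return state
```

The point of the state-machine formulation is visible even in this toy: each step is a discrete, inspectable transition over an explicit (query, ranking) state, so the controller can stop as soon as further transitions stop changing the state, instead of emitting an open-ended CoT trace.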