Entropy After </Think> for reasoning model early exiting

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models often suffer from “overthinking”—continuing chain-of-thought (CoT) generation after reaching the correct answer—leading to unnecessary computational overhead. To address this, we propose EAT (Entropy-based Adaptive Termination), a black-box early-exit mechanism that leverages prediction entropy stability following explicit stop tokens (e.g., `</think>`) as a termination signal. EAT dynamically detects CoT convergence by monitoring both the entropy of subsequent token predictions and the variance of their exponential moving average—without requiring access to internal logits or model-specific modifications. Crucially, EAT is architecture-agnostic and requires no fine-tuning or training intervention. Evaluated on MATH500 and AIME2025 benchmarks, EAT reduces token consumption by 13%–21% while preserving original accuracy, thereby significantly improving inference efficiency.

📝 Abstract
Large reasoning models show improved performance with longer chains of thought. However, recent work has highlighted (qualitatively) their tendency to overthink, continuing to revise answers even after reaching the correct solution. We quantitatively confirm this inefficiency by tracking Pass@1 averaged over a large number of rollouts and find that the model often begins to consistently produce the correct answer early in the reasoning, making the extra reasoning a waste of tokens. To detect and prevent overthinking, we propose a simple and inexpensive novel signal -- Entropy After </Think> (EAT) -- for monitoring and deciding whether to exit reasoning early. By appending a stop-thinking token (</think>) and monitoring the entropy of the following token as the model reasons, we obtain a trajectory that decreases and stabilizes when Pass@1 plateaus; thresholding its variance under an exponential moving average yields a practical stopping rule. Importantly, our approach adaptively allocates compute based on the EAT trajectory, spending it more efficiently than a fixed token budget applied to all questions. Empirically, on MATH500 and AIME2025, EAT reduces token usage by 13-21% without harming accuracy, and it remains effective in black-box settings where logits from the reasoning model are not accessible and EAT is computed with proxy models.
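As a rough sketch of the probe described in the abstract (the function names, probing schedule, and `model_logits_fn` interface are illustrative assumptions, not the paper's implementation), the signal is the Shannon entropy of the next-token distribution obtained after appending `</think>` to the partial reasoning trace:

```python
import numpy as np

def next_token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution given raw logits."""
    z = logits - logits.max()            # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax
    p = p[p > 0]                         # avoid log(0)
    return float(-(p * np.log(p)).sum())

def eat_trajectory(model_logits_fn, context_ids, stop_ids, probe_positions):
    """Hypothetical probe loop: at each chosen position in the reasoning trace,
    append the stop-thinking token(s) and record the entropy of the token that
    would follow </think>."""
    traj = []
    for pos in probe_positions:
        probe = context_ids[:pos] + stop_ids   # partial CoT + "</think>"
        logits = model_logits_fn(probe)        # one forward pass (or a proxy model)
        traj.append(next_token_entropy(logits))
    return traj
```

In the black-box setting mentioned in the abstract, `model_logits_fn` would be backed by a proxy model rather than the reasoning model itself.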
Problem

Research questions and friction points this paper is trying to address.

Quantitatively confirming reasoning models' tendency to overthink after correct solutions
Proposing entropy monitoring to detect and prevent inefficient extra reasoning
Reducing token usage by adaptively allocating compute based on reasoning stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses entropy after stop token for early exiting
Monitors token entropy variance with moving average
Adaptively allocates compute via entropy trajectory thresholding
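The EMA-variance thresholding listed above could be sketched as follows (a minimal illustration; `alpha`, the window size, and the variance threshold are assumed hyperparameters, not values from the paper):

```python
from collections import deque

class EATStopper:
    """Sketch of an early-exit rule: smooth the entropy trajectory with an
    exponential moving average (EMA) and stop once the EMA's recent variance
    falls below a threshold, i.e. the trajectory has stabilized."""

    def __init__(self, alpha=0.3, window=4, var_threshold=1e-3):
        self.alpha = alpha                 # EMA smoothing factor
        self.window = deque(maxlen=window) # recent EMA values
        self.var_threshold = var_threshold
        self.ema = None

    def update(self, entropy):
        """Feed one entropy reading; return True when it is safe to exit."""
        self.ema = entropy if self.ema is None else (
            self.alpha * entropy + (1 - self.alpha) * self.ema)
        self.window.append(self.ema)
        if len(self.window) < self.window.maxlen:
            return False                   # not enough history yet
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        return var < self.var_threshold
```

A trajectory that decreases and then flattens (as the abstract describes when Pass@1 plateaus) triggers the stop; a still-changing trajectory keeps reasoning alive, which is what lets compute be allocated per question rather than by a fixed budget.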