AI Summary
To address the issues of verbosity, redundancy, and hallucination in chain-of-thought (CoT) generation by large language models (LLMs) on complex reasoning tasks, this paper proposes Prompt Intervention (PI), a test-time prompting framework. PI introduces three dynamically activated modules: When (timing judgment for intervention), How (design of intervention mechanisms), and Which (optimization of token sampling strategies). Together they enable fine-grained, stepwise regulation of intermediate reasoning steps during generation, thereby mitigating the lack of process-level reward supervision in post-training. Grounded in human problem-solving heuristics and cognitive science principles, PI enhances both reasoning conciseness and reliability. Experiments across multiple LLMs and datasets demonstrate that PI reduces average CoT length by 23.6%, decreases hallucination rates by 18.4%, and improves reasoning accuracy by 2.1–5.7 percentage points. The framework exhibits strong generalizability and inherent interpretability.
Abstract
Test-time compute has led to remarkable success in the large language model (LLM) community, particularly for complex tasks, where longer chains of thought (CoTs) are generated to enhance reasoning capabilities. However, growing evidence reveals that such reasoning models often produce CoTs plagued by excessive redundancy, including unnecessary verification steps and repetitive reasoning shifts. The root cause lies in their post-training, which relies heavily on outcome reward paradigms, since data for process reward paradigms, which regulate intermediate reasoning steps, are difficult to construct at scale. To address this, we propose PI, a novel framework for Test-time Prompt Intervention. PI provides an interface to dynamically guide and regulate reasoning paths during inference through timely (When module) and proper (How module) interventions and post-intervention sampling (Which module). This allows human problem-solving expertise and cognitive science principles to be seamlessly integrated into LLMs' reasoning processes, enhancing controllability and interpretability. Extensive experiments across multiple models and datasets demonstrate that PI significantly shortens CoTs while reducing hallucination, yielding more concise and reliable reasoning.
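The abstract's three-module design (When, How, Which) can be pictured as hooks inside a stepwise generation loop. The sketch below is a minimal, hypothetical illustration of that control flow only: the module names follow the paper, but every function, heuristic, and signature here is an assumption for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a test-time prompt-intervention loop.
# All heuristics below (trigger phrases, injected hint, shortest-candidate
# selection) are illustrative stand-ins, not the paper's actual modules.

def when_to_intervene(step_text: str) -> bool:
    """When module (stand-in): flag steps that look like redundant
    verification or a repetitive reasoning shift."""
    triggers = ("wait,", "let me double-check", "alternatively,")
    return any(t in step_text.lower() for t in triggers)

def how_to_intervene() -> str:
    """How module (stand-in): choose a corrective prompt to inject."""
    return "Summarize the conclusion so far and proceed directly."

def which_to_sample(candidates: list[str]) -> str:
    """Which module (stand-in): pick among post-intervention
    continuations; here, simply prefer the most concise one."""
    return min(candidates, key=len)

def generate_with_intervention(model_step_fn, prompt: str,
                               max_steps: int = 8) -> list[str]:
    """Drive step-wise generation, intervening between reasoning steps.
    `model_step_fn(prompt, steps)` is any callable that returns the
    next reasoning step as a string."""
    steps: list[str] = []
    for _ in range(max_steps):
        step = model_step_fn(prompt, steps)
        if when_to_intervene(step):
            hint = how_to_intervene()
            # Resample several continuations under the injected hint.
            candidates = [model_step_fn(prompt + "\n" + hint, steps)
                          for _ in range(3)]
            step = which_to_sample(candidates)
        steps.append(step)
        if "final answer" in step.lower():
            break
    return steps
```

With a toy stand-in for `model_step_fn`, a redundant "Wait, let me double-check" step triggers the intervention and the loop terminates early with a direct answer, which is the conciseness behavior the paper targets.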