🤖 AI Summary
This work addresses the computational inefficiency of existing Monte Carlo Tree Search (MCTS)-based reasoning methods, which treat each rollout as an isolated trajectory and lack cross-trajectory information sharing. To overcome this limitation, the authors propose PRISM-MCTS, a novel framework that introduces a dynamic shared memory mechanism—recording both heuristics and fallacies—and integrates a process reward model (PRM) to emulate human-like parallel thinking and metacognitive reflection. This enables collaborative optimization and dynamic pruning of reasoning paths. Combined with a data-efficient, few-shot training strategy for the PRM, PRISM-MCTS substantially outperforms MCTS-RAG and Search-o1 on reasoning benchmarks such as GPQA while using only half the number of reasoning trajectories, demonstrating both its efficiency and effectiveness.
📝 Abstract
PRISM-MCTS: Learning from Reasoning Trajectories with Metacognitive Reflection

Siyuan Cheng, Bozhong Tian, Yanchao Hao, Zheng Wei

Published: 06 Apr 2026, Last Modified: 06 Apr 2026
ACL 2026 Findings Conference
License: CC BY 4.0

Keywords: Efficient/Low-Resource Methods for NLP, Generation, Question Answering

Abstract: The emergence of reasoning models, exemplified by OpenAI o1, signifies a transition from intuitive to deliberative cognition, effectively reorienting the scaling laws from pre-training paradigms toward test-time computation. While Monte Carlo Tree Search (MCTS) has shown promise in this domain, existing approaches typically treat each rollout as an isolated trajectory. This lack of information sharing leads to severe inefficiency and substantial computational redundancy, as the search process fails to leverage insights from prior explorations. To address these limitations, we propose PRISM-MCTS, a novel reasoning framework that draws inspiration from human parallel thinking and reflective processes. PRISM-MCTS integrates a Process Reward Model (PRM) with a dynamic shared memory, capturing both "Heuristics" and "Fallacies". By reinforcing successful strategies and pruning error-prone branches, PRISM-MCTS achieves collaborative refinement across trajectories. Furthermore, we develop a data-efficient training strategy for the PRM, achieving high-fidelity evaluation under a few-shot regime. Empirical evaluations across diverse reasoning benchmarks substantiate the efficacy of PRISM-MCTS. Notably, it halves the trajectory requirements on GPQA while surpassing MCTS-RAG and Search-o1, demonstrating that it scales inference by reasoning judiciously rather than exhaustively.
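The core idea the abstract describes — rollouts writing "Heuristics" and "Fallacies" to a memory shared across trajectories, with a PRM scoring steps so later rollouts can reinforce good branches and prune bad ones — can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the `SharedMemory` class, the score thresholds, and the `toy_prm` stand-in (a keyword check, not a trained reward model) are all assumptions introduced here for illustration.

```python
class SharedMemory:
    """Hypothetical cross-trajectory memory of step outcomes."""

    def __init__(self):
        self.heuristics = set()  # step signatures worth reinforcing
        self.fallacies = set()   # step signatures to prune in later rollouts

    def record(self, step, prm_score, good_thresh=0.7, bad_thresh=0.3):
        # Thresholds are illustrative, not from the paper.
        if prm_score >= good_thresh:
            self.heuristics.add(step)
        elif prm_score <= bad_thresh:
            self.fallacies.add(step)


def toy_prm(step):
    """Toy stand-in for a Process Reward Model: scores a step in [0, 1]."""
    return 0.9 if "correct" in step else 0.1


def rollout(candidate_steps, memory):
    """One trajectory: at each depth, skip steps recorded as fallacies,
    prefer steps recorded as heuristics, and write outcomes back."""
    path = []
    for options in candidate_steps:
        viable = [s for s in options if s not in memory.fallacies]
        if not viable:
            break  # entire branch pruned by shared memory
        preferred = [s for s in viable if s in memory.heuristics]
        step = (preferred or viable)[0]
        score = toy_prm(step)
        memory.record(step, score)  # information shared with later rollouts
        path.append((step, score))
    return path


memory = SharedMemory()
steps = [["wrong-lemma", "correct-lemma"], ["correct-derivation"]]
first = rollout(steps, memory)   # explores, records a fallacy and a heuristic
second = rollout(steps, memory)  # avoids the recorded fallacy from rollout 1
```

In this toy run, the first rollout tries `"wrong-lemma"`, which the PRM scores low, so it lands in `memory.fallacies`; the second rollout never considers it and goes straight to `"correct-lemma"` — a miniature version of the cross-trajectory pruning that lets PRISM-MCTS reach its answer with fewer trajectories.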