🤖 AI Summary
To address the high computational overhead and low training efficiency of Monte Carlo Tree Search (MCTS)-based algorithms such as MuZero, this paper proposes two core innovations: (1) a backward-view reanalysis mechanism that reuses value estimates from child nodes to skip redundant subtree expansions, thereby reducing per-iteration MCTS latency; and (2) periodic batched reanalysis of the entire replay buffer, replacing frequent small-batch updates, to jointly improve compute utilization and data throughput. The method integrates one-armed bandit-inspired backward value reuse, batched scheduling, and an end-to-end MCTS-deep reinforcement learning co-training framework. Evaluated on Atari, the DeepMind Control Suite, and board-game benchmarks, the approach achieves up to 3.2x faster training while matching or exceeding baseline methods in both sample efficiency and final policy performance.
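The backward-view reuse idea can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the names `Node`, `puct_select`, and `reanalyze_with_reuse` are invented here, and the search is reduced to a one-step tree so the reuse logic stands out. The key point is that when selection reaches the child whose value was already estimated by a later search, that cached value is backed up directly instead of expanding the subtree.

```python
# Hypothetical sketch of backward-view value reuse during reanalysis.
# All names are illustrative; this is not the ReZero/LightZero source code.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float
    value_sum: float = 0.0
    visits: int = 0
    children: dict = field(default_factory=dict)

    def value(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(root, c=1.25):
    # Simplified PUCT score: mean value plus a prior-weighted exploration bonus.
    def score(a):
        child = root.children[a]
        return child.value() + c * child.prior * math.sqrt(root.visits) / (1 + child.visits)
    return max(root.children, key=score)

def reanalyze_with_reuse(root, reused_action, reused_value, num_sims, simulate):
    """Run simulations at `root`; whenever selection picks `reused_action`,
    back up the cached `reused_value` instead of searching that sub-tree."""
    for _ in range(num_sims):
        action = puct_select(root)
        child = root.children[action]
        if action == reused_action:
            leaf_value = reused_value     # reuse: sub-tree search skipped entirely
        else:
            leaf_value = simulate(child)  # normal expansion + network evaluation
        # Back up the value along the (one-step) path.
        child.value_sum += leaf_value
        child.visits += 1
        root.value_sum += leaf_value
        root.visits += 1
```

In the full algorithm the reused value comes from the search already performed at the next time step of the same trajectory, which is why the reuse is "backward-view": later searches inform earlier ones.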
📝 Abstract
Monte Carlo Tree Search (MCTS)-based algorithms, such as MuZero and its derivatives, have achieved widespread success in various decision-making domains. These algorithms employ the reanalyze process to enhance sample efficiency from stale data, albeit at the expense of significant wall-clock time. To address this issue, we propose a general approach named ReZero to speed up tree search operations for MCTS-based algorithms. Specifically, drawing inspiration from the one-armed bandit model, we reanalyze training samples through a backward-view reuse technique that reuses the value estimate of a particular child node to save the corresponding sub-tree search time. To further adapt to this design, we periodically reanalyze the entire buffer instead of frequently reanalyzing mini-batches. Together, these two designs significantly reduce search cost while maintaining or even improving performance, simplifying both data collection and reanalysis. Experiments on Atari environments, DMControl suites, and board games demonstrate that ReZero substantially improves training speed while maintaining high sample efficiency. The code is available as part of the LightZero MCTS benchmark at https://github.com/opendilab/LightZero.
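The scheduling change can be made concrete with a short sketch. This is an assumed, simplified training loop, not the LightZero implementation: `buffer`, `reanalyze`, and `update` are hypothetical callables standing in for the real components. It contrasts MuZero-style per-mini-batch reanalysis, where fresh search targets are recomputed every iteration, with ReZero-style periodic whole-buffer reanalysis, where one large batched search refreshes targets and subsequent updates reuse them.

```python
# Hypothetical contrast of the two reanalysis schedules; names are illustrative.

def train_minibatch_reanalysis(buffer, num_iters, reanalyze, update):
    """MuZero-style: re-run search on every sampled mini-batch."""
    for _ in range(num_iters):
        batch = buffer.sample()
        reanalyze(batch)   # targets recomputed each iteration (frequent, small searches)
        update(batch)

def train_periodic_reanalysis(buffer, num_iters, period, reanalyze, update):
    """ReZero-style: periodically reanalyze the whole buffer in one large batch."""
    for it in range(num_iters):
        if it % period == 0:
            reanalyze(buffer.all())   # one batched search over all stored transitions
        update(buffer.sample())       # intermediate iterations reuse cached targets
```

Batching the search over the whole buffer lets the GPU process many positions per inference call, which is where the wall-clock savings come from, at the cost of training on slightly staler targets between refreshes.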