🤖 AI Summary
This work addresses the limited ability of existing GUI agents to recover from early errors, which stems from their inability to backtrack or reuse partial action sequences. To overcome this, the authors propose a unified planning framework based on step-level Monte Carlo Tree Search (MCTS) that actively models the planning space through alpha-UCT-guided exploration, a comparison-driven evaluation mechanism, and a diversity-constrained expansion strategy. This approach enables efficient prefix reuse and early pruning of unpromising trajectories. Evaluated on the OSWorld benchmark, the method achieves a success rate of approximately 77%, substantially outperforming trajectory-level baselines under comparable computational budgets and demonstrating its effectiveness on complex GUI tasks.
📝 Abstract
While scaling test-time compute through trajectory-level sampling has significantly improved Graphical User Interface (GUI) agents, their inability to backtrack prevents the reuse of partial successes and recovery from early missteps. In this paper, we introduce Agent Alpha, a unified framework that synergizes generation, exploration, and evaluation through step-level Monte Carlo Tree Search (MCTS). This enables the agent to actively model and exploit the structure of the planning space. By integrating alpha-UCT-guided search into the interaction loop, Agent Alpha enables deliberate planning, facilitating early pruning of suboptimal branches and efficient prefix reuse. We also employ comparison-driven evaluation to mitigate absolute scoring biases and diversity-constrained expansion to maintain a compact, informative search space, and we analyze the regret bound of alpha-UCT. On the OSWorld benchmark, Agent Alpha achieves a state-of-the-art success rate of $\sim 77\%$, significantly outperforming trajectory-level baselines under equivalent compute.
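To make the step-level search concrete, the sketch below shows a minimal MCTS loop over action prefixes. The names (`Node`, `step_level_mcts`, `propose`, `evaluate`) are illustrative, not from the paper, and the selection rule here is standard UCT with an exploration constant; the paper's alpha-UCT variant and its comparison-driven evaluator and diversity constraints are not specified in the abstract and are only noted in comments. The key point the sketch captures is prefix reuse: the tree persists across iterations, so shared step prefixes are explored once and revisited cheaply, and low-value branches stop attracting visits (early pruning).

```python
import math
import random


class Node:
    """One node per partial action sequence (step prefix) in the planning tree."""

    def __init__(self, action=None, parent=None):
        self.action = action        # the step that led to this node (None at root)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0            # cumulative reward from evaluations

    def uct_score(self, c=1.4):
        # Standard UCT; alpha-UCT (per the paper) would modify this trade-off.
        if self.visits == 0:
            return float("inf")     # always try unvisited children first
        exploit = self.value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore


def step_level_mcts(propose, evaluate, iterations=200, max_depth=5,
                    branching=3, seed=0):
    """Toy step-level MCTS: `propose(prefix, k)` yields k candidate next steps
    (where diversity constraints would apply), `evaluate(prefix)` scores a
    partial trajectory (where comparison-driven evaluation would apply)."""
    rng = random.Random(seed)
    root = Node()
    for _ in range(iterations):
        # 1. Selection: descend the shared tree via UCT (prefix reuse).
        node, prefix = root, []
        while node.children:
            node = max(node.children, key=Node.uct_score)
            prefix.append(node.action)
        # 2. Expansion: add candidate next steps below the selected leaf.
        if len(prefix) < max_depth:
            for action in propose(prefix, branching):
                node.children.append(Node(action, parent=node))
            node = rng.choice(node.children)
            prefix = prefix + [node.action]
        # 3. Evaluation: score the partial trajectory.
        reward = evaluate(prefix)
        # 4. Backpropagation: credit every ancestor, so good prefixes
        #    attract future visits and bad branches are starved (pruned).
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Commit to the most-visited first step.
    return max(root.children, key=lambda n: n.visits).action


# Hypothetical toy task: actions are {0, 1, 2}; only trajectories that
# start with action 1 are rewarded. The search should commit to step 1.
propose = lambda prefix, k: list(range(k))
evaluate = lambda prefix: 1.0 if prefix and prefix[0] == 1 else 0.0
best_first_step = step_level_mcts(propose, evaluate)
```

In a GUI-agent setting, `propose` would be the policy model generating candidate actions and `evaluate` a learned or comparison-based value signal; the backpropagation step is what lets a promising prefix be reused across many sampled continuations instead of being regenerated from scratch.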