🤖 AI Summary
This work addresses the structural mismatch and distributional shift between offline expert trajectories and online policy execution in end-to-end GUI agents trained with limited demonstration data. To mitigate these challenges, the authors propose BEPA, a two-level expert-to-policy assimilation mechanism: the first level uses the base policy to generate structurally aligned, reachable trajectories, while the second maintains a dynamically updated per-task cache of relevant trajectories that supplies policy-aligned guidance signals. Combining reinforcement learning with verifiable rewards (RLVR), vision-language models, and trajectory replay, BEPA delivers significant gains: it boosts the success rate of UITARS-1.5-7B from 22.87% to 32.13% on OSWorld-Verified and from 5.74% to 10.30% on a held-out test set, with consistent improvements also observed on MMBench-GUI and Online-Mind2Web.
📝 Abstract
Vision-language models are increasingly deployed as computer-use agents (CUAs) that operate desktops and browsers. Top-performing CUAs are framework-based systems that decompose planning and execution, while end-to-end screenshot-to-action policies are easier to deploy but lag behind on benchmarks such as OSWorld-Verified. GUI datasets like OSWorld pose two bottlenecks: they expose only a few hundred interactive, verifiable tasks and environments, and expert trajectories must be gathered by interacting with these environments, making such data hard to scale. We therefore ask how reinforcement learning from verifiable rewards (RLVR) can best exploit a small pool of existing expert trajectories to train end-to-end policies. Naively mixing these off-policy traces into on-policy RLVR is brittle: even after format conversion, expert trajectories exhibit structural mismatch and distribution shift from the learner. We propose BEPA (Bi-Level Expert-to-Policy Assimilation), which turns static expert traces into policy-aligned guidance via self-rolled reachable trajectories under the base policy (LEVEL-1) and a per-task, dynamically updated cache used in RLVR (LEVEL-2). On OSWorld-Verified, BEPA improves UITARS-1.5-7B success from 22.87% to 32.13% and raises success on a held-out split from 5.74% to 10.30%, with consistent gains on MMBench-GUI and Online-Mind2Web. Our code and data are available at: https://github.com/LEON-gittech/Verl_GUI.git