🤖 AI Summary
Current multimodal large language models (MLLMs) for GUI automation rely heavily on high-quality offline demonstration trajectories and lack introspective reasoning and error-recovery capabilities. To address this, we propose a three-stage training framework: (1) GUI-specific pre-training, (2) offline supervised fine-tuning, and (3) online reflective tuning, enabling fully automated generation of, and learning from, reflective data. We introduce the first benchmark for evaluating GUI introspection tasks; design an iterative online reflective tuning algorithm; and build the first mobile-oriented GUI automation environment supporting online model training. Our approach integrates MLLMs with trajectory augmentation, reflection-driven reinforcement learning, mobile GUI simulation, and self-supervised reflective data synthesis, significantly improving robustness in error recovery, step correction, and long-horizon task execution. All datasets, models, training environments, and tooling are publicly released.
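As a rough illustration of how reflection and error-correction data might be derived from existing successful trajectories (as summarized above), the Python sketch below injects a plausible wrong action into a correct trajectory so the resulting sample exhibits a "mistake, then correction" pattern. All names here (`Step`, `make_distractor_action`, `build_reflection_sample`) are hypothetical and are not the paper's actual API.

```python
# Hypothetical sketch: convert a successful GUI trajectory into a reflection /
# error-correction sample by inserting a plausible wrong action before the
# correct one. Interfaces are illustrative, not the released pipeline.
import random
from dataclasses import dataclass, replace
from typing import List

@dataclass
class Step:
    screenshot: str   # path or ID of the screen observation
    instruction: str  # natural-language goal for the episode
    action: str       # serialized action, e.g. "CLICK(132, 840)"

def make_distractor_action(step: Step) -> str:
    """Return a plausible but incorrect action for this screen (stub)."""
    # In practice this could target a nearby UI element or perturb parameters.
    x, y = random.randint(0, 1080), random.randint(0, 1920)
    return f"CLICK({x}, {y})"

def build_reflection_sample(trajectory: List[Step], error_index: int) -> List[Step]:
    """Insert an injected error before step `error_index`, followed by the
    original correct action, so the model sees 'mistake -> reflect -> correct'."""
    wrong = replace(trajectory[error_index],
                    action=make_distractor_action(trajectory[error_index]))
    return trajectory[:error_index] + [wrong] + trajectory[error_index:]
```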
📄 Abstract
Multimodal Large Language Models (MLLMs) have shown great potential in revolutionizing Graphical User Interface (GUI) automation. However, existing GUI models mostly learn from nearly error-free offline trajectories and thus lack reflection and error-recovery capabilities. To bridge this gap, we propose GUI-Reflection, a novel framework that explicitly integrates self-reflection and error-correction capabilities into end-to-end multimodal GUI models through dedicated training stages: GUI-specific pre-training, offline supervised fine-tuning (SFT), and online reflection tuning. GUI-Reflection enables the emergence of self-reflection behavior through fully automated data generation and learning processes, without requiring any human annotation. Specifically, 1) we first propose scalable data pipelines to automatically construct reflection and error-correction data from existing successful trajectories. Whereas existing GUI models mainly focus on grounding and UI understanding abilities, we propose the GUI-Reflection Task Suite to learn and evaluate reflection-oriented abilities explicitly. 2) Furthermore, we build a diverse and efficient environment for online training and data collection of GUI models on mobile devices. 3) We also present an iterative online reflection tuning algorithm that leverages the proposed environment, enabling the model to continuously enhance its reflection and error-correction abilities. Our framework equips GUI agents with self-reflection and correction capabilities, paving the way for more robust, adaptable, and intelligent GUI automation, with all data, models, environments, and tools to be released publicly.
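The iterative online reflection tuning described above could look roughly like the loop below: roll the model out in the mobile environment, flag erroneous steps, derive corrective supervision, and fine-tune before the next iteration. This is a minimal sketch under assumed interfaces (`env`, `verifier`, `model.finetune`), not the released implementation.

```python
# Hypothetical sketch of an iterative online reflection-tuning loop. The
# environment, verifier, and model interfaces below are placeholders standing
# in for whatever the actual framework provides.
def online_reflection_tuning(model, env, verifier, num_iterations=3, episodes_per_iter=100):
    for _ in range(num_iterations):
        reflection_data = []
        for _ in range(episodes_per_iter):
            task = env.sample_task()          # pick a mobile GUI task
            obs = env.reset(task)
            trajectory, done = [], False
            while not done:
                action = model.act(obs, task)            # current policy acts
                next_obs, done = env.step(action)
                trajectory.append((obs, action, next_obs))
                obs = next_obs
            # Label erroneous steps and pair each with a corrective action.
            for step_idx in verifier.find_errors(trajectory, task):
                correction = verifier.propose_correction(trajectory, step_idx, task)
                reflection_data.append((trajectory, step_idx, correction))
        # Fine-tune on the newly collected reflection / correction data,
        # then repeat collection with the improved model.
        model.finetune(reflection_data)
    return model
```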