🤖 AI Summary
Weak generalization of GUI agents, scarcity of cross-platform trajectory data, and the high cost of manual annotation hinder progress in GUI automation. To address these challenges, this paper proposes the first automatic paradigm for mining and constructing GUI interaction trajectories from real-world multimodal web tutorials (video and text-image). The approach integrates multimodal crawling, fine-tuning of a vision-language model (Qwen2.5-VL), GUI element localization and action modeling, and cross-platform trajectory standardization. This yields GUI-Net, a large-scale, open-source dataset comprising 143K trajectories spanning five operating systems and 200+ applications. Leveraging GUI-Net, we develop TongUI, an end-to-end GUI agent training framework requiring zero human annotation. Evaluated on mainstream GUI grounding and navigation benchmarks, TongUI achieves a ~10% average performance gain over prior methods. The GUI-Net dataset, source code, and trained models will be fully open-sourced.
📝 Abstract
Building Graphical User Interface (GUI) agents is a promising research direction, in which agents simulate human interaction with computers or mobile phones to perform diverse GUI tasks. However, a major challenge in developing generalized GUI agents is the lack of sufficient trajectory data across various operating systems and applications, mainly due to the high cost of manual annotation. In this paper, we propose the TongUI framework, which builds generalized GUI agents by learning from rich multimodal web tutorials. Concretely, we crawl and process online GUI tutorials (such as videos and articles) into GUI agent trajectory data, producing the GUI-Net dataset of 143K trajectories across five operating systems and more than 200 applications. We develop the TongUI agent by fine-tuning Qwen2.5-VL-3B/7B models on GUI-Net. The resulting agents show remarkable improvements on commonly used grounding and navigation benchmarks, outperforming baseline agents by about 10% on multiple benchmarks, which demonstrates the effectiveness of the GUI-Net dataset and underscores the significance of our TongUI framework. We will fully open-source the code, the GUI-Net dataset, and the trained models soon.
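To make the pipeline concrete, here is a minimal sketch of what a standardized trajectory record mined from a tutorial might look like. All field and class names (`GUIStep`, `Trajectory`, `target_bbox`, etc.) are illustrative assumptions for exposition, not the actual GUI-Net schema.

```python
# Hypothetical trajectory schema: each tutorial step becomes a
# (screenshot, instruction, grounded action) triple; a task-level
# record bundles the steps with platform/app metadata.
from dataclasses import dataclass, field, asdict
from typing import Optional, Tuple, List

@dataclass
class GUIStep:
    screenshot: str                             # path/URL of the video frame or article image
    instruction: str                            # natural-language step text from the tutorial
    action: str                                 # e.g. "click", "type", "scroll"
    target_bbox: Optional[Tuple[int, int, int, int]] = None  # grounded element (x1, y1, x2, y2)
    text_input: Optional[str] = None            # typed text, for "type" actions

@dataclass
class Trajectory:
    task: str
    platform: str                               # one of the five operating systems
    app: str
    steps: List[GUIStep] = field(default_factory=list)

# Example: two steps mined from a hypothetical browser tutorial.
traj = Trajectory(task="Mute the current tab", platform="Windows", app="Chrome")
traj.steps.append(GUIStep(
    screenshot="frame_001.png",
    instruction="Right-click the tab you want to mute",
    action="click",
    target_bbox=(120, 8, 260, 36),
))
traj.steps.append(GUIStep(
    screenshot="frame_002.png",
    instruction="Choose 'Mute site' from the context menu",
    action="click",
    target_bbox=(140, 180, 300, 204),
))

record = asdict(traj)  # serializable form, e.g. one line of a JSONL dataset
```

Records in this shape can be emitted directly as JSONL and consumed by a standard vision-language fine-tuning loop.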