Phi-Ground Tech Report: Advancing Perception in GUI Grounding

📅 2025-07-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current end-to-end GUI grounding models achieve under 65% accuracy on key benchmarks such as ScreenSpot-pro and UI-Vision, severely limiting the practical operational capability of computer-use agents (CUAs). To address this, the authors conduct an empirical study of grounding-model training, from data collection through training recipes, and introduce the Phi-Ground model family, which attains state-of-the-art performance among models under 10B parameters across all five major GUI grounding benchmarks in agent settings. Even in the end-to-end model setting, Phi-Ground achieves SOTA accuracy scores of 43.2 on ScreenSpot-pro and 27.2 on UI-Vision, substantially outperforming prior methods. Systematic ablations indicate that meticulous training design and tight data-model co-optimization are critical for advancing GUI perception, establishing an efficient, scalable pathway toward practical CUA deployment.

📝 Abstract
With the development of multimodal reasoning models, Computer Use Agents (CUAs), akin to Jarvis from "Iron Man", are becoming a reality. GUI grounding is a core component for CUAs to execute actual actions, similar to mechanical control in robotics, and it directly determines the success or failure of the system. It determines actions such as clicking and typing, as well as related parameters like the coordinates for clicks. Current end-to-end grounding models still achieve less than 65% accuracy on challenging benchmarks like ScreenSpot-pro and UI-Vision, indicating they are far from being ready for deployment, as a single misclick can result in unacceptable consequences. In this work, we conduct an empirical study on the training of grounding models, examining details from data collection to model training. Ultimately, we developed the Phi-Ground model family, which achieves state-of-the-art performance across all five grounding benchmarks for models under 10B parameters in agent settings. In the end-to-end model setting, our model still achieves SOTA results with scores of 43.2 on ScreenSpot-pro and 27.2 on UI-Vision. We believe that the various details discussed in this paper, along with our successes and failures, not only clarify the construction of grounding models but also benefit other perception tasks. Project homepage: https://zhangmiaosen2000.github.io/Phi-Ground/
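The abstract reports grounding accuracy on benchmarks like ScreenSpot-pro. A minimal sketch of how such benchmarks are commonly scored (a prediction counts as correct when the predicted click point falls inside the target element's bounding box) might look like the following; the function names here are illustrative, not from the paper:

```python
# Illustrative sketch of click-accuracy scoring for GUI grounding.
# Assumption: a prediction is correct iff the predicted click point
# lies inside the ground-truth element's bounding box.

def click_in_bbox(x, y, bbox):
    """bbox = (left, top, right, bottom) in pixels."""
    left, top, right, bottom = bbox
    return left <= x <= right and top <= y <= bottom

def grounding_accuracy(predictions, targets):
    """predictions: list of (x, y) clicks; targets: list of bboxes."""
    hits = sum(
        click_in_bbox(x, y, bbox)
        for (x, y), bbox in zip(predictions, targets)
    )
    return hits / len(targets)

# Toy example: two of three predicted clicks land inside their targets.
preds = [(50, 40), (200, 10), (5, 5)]
boxes = [(30, 20, 80, 60), (100, 0, 150, 30), (0, 0, 10, 10)]
print(grounding_accuracy(preds, boxes))  # 2/3
```

Under this metric, a score of 43.2 on ScreenSpot-pro means roughly 43% of predicted clicks land inside the correct target element.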
Problem

Research questions and friction points this paper is trying to address.

Improving GUI grounding accuracy for Computer Use Agents (CUAs)
Addressing the sub-65% accuracy of current end-to-end grounding models on challenging benchmarks
Closing the gap between benchmark performance and practical CUA deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed the Phi-Ground model family (under 10B parameters)
Achieved SOTA on five GUI grounding benchmarks
Conducted an empirical study of grounding-model training, from data collection to training recipes
Miaosen Zhang (Microsoft)
Ziqiang Xu (Microsoft)
Jialiang Zhu (Southeast University)
Qi Dai (Microsoft)
Kai Qiu (Microsoft)
Yifan Yang (Microsoft)
Chong Luo (Microsoft Research; multimedia communications, computer vision)
Tianyi Chen (Microsoft)
Justin Wagle (Microsoft)
Tim Franklin (Microsoft)
Baining Guo (Distinguished Scientist, Microsoft Research; computer graphics, virtual reality, geometric modeling)