AndroidControl-Curated: Revealing the True Potential of GUI Agents through Benchmark Purification

📅 2025-10-21
🤖 AI Summary
Existing GUI agent evaluation benchmarks (e.g., AndroidControl) suffer from ambiguous annotations and factual inaccuracies, leading to systematic underestimation of model capabilities. To address this, we propose AndroidControl-Curated, the first high-quality benchmark jointly refined through human verification and automated curation, which systematically rectifies the annotation flaws in the original benchmark. We further introduce Magma-R1-3B, a lightweight yet effective 3B-parameter vision-language model that achieves substantial gains in instruction following and UI understanding after only 60 hours of fine-tuning on an H20 GPU. Experiments show that state-of-the-art models gain 15 percentage points in success rate on the curated benchmark, reaching 74.8%. Moreover, Magma-R1-3B matches the performance of Qwen3-VL-235B while using only 0.5% of its parameter count. This work establishes a more reliable evaluation standard and a practical, scalable modeling pathway for GUI agents.

📝 Abstract
On-device virtual assistants like Siri and Google Assistant are increasingly pivotal, yet their capabilities are hamstrung by a reliance on rigid, developer-dependent APIs. GUI agents offer a powerful, API-independent alternative, but their adoption is hindered by the perception of poor performance: even the best models (e.g., Qwen3-VL-235B) are capped at around 60% on benchmarks like AndroidControl, far from viable for real-world use. Our research reveals that the issue lies not only with the models but with the benchmarks themselves. We identified notable shortcomings in AndroidControl, including ambiguities and factual errors, which systematically underrate agent capabilities. To address this critical oversight, we enhanced AndroidControl into AndroidControl-Curated, a refined version of the benchmark improved through a rigorous purification pipeline. On this enhanced benchmark, state-of-the-art models achieve success rates nearing 75% on complex tasks (a 15-point improvement), suggesting that on-device GUI agents are closer to practical deployment than previously thought. We introduce our new SOTA model, Magma-R1-3B, post-trained on just 2.4k curated samples using 60 hours on an H20 GPU (approximately $60). Despite being 200 times smaller in parameters, this model delivers performance comparable to Qwen3-VL-235B. We release both the AndroidControl-Curated benchmark and the Magma-R1 model to the research community, encouraging adoption of this enhanced benchmark to better reflect model capabilities and accelerate the development of robust, on-device virtual assistants.
Problem

Research questions and friction points this paper is trying to address.

Benchmarks underestimate GUI agent capabilities due to annotation ambiguities and factual errors
Current GUI agents appear to perform poorly on existing benchmarks, hindering real-world adoption
Improved benchmarks are needed to accurately assess agent performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced the benchmark through a rigorous purification pipeline
Introduced a compact model post-trained on a small set of curated samples
Achieved performance comparable to far larger models with significantly fewer parameters
Ho Fai Leung
BMW ArcherMind Information Technology Co. Ltd. (BA TechWorks)
Xiaoyan Xi
BMW ArcherMind Information Technology Co. Ltd. (BA TechWorks)
Fei Zuo
University of Central Oklahoma
System and Network Security · Machine Learning · Internet of Things