🤖 AI Summary
Existing GUI agent evaluation benchmarks such as AndroidControl suffer from ambiguous annotations and factual inaccuracies, leading to systematic underestimation of model capabilities. To address this, we propose AndroidControl-Curated, the first high-quality benchmark refined through combined human verification and automated curation, systematically rectifying the annotation flaws of the original benchmark. We further introduce Magma-R1-3B, a lightweight 3B-parameter vision-language model that achieves substantial gains in instruction following and UI understanding after only 60 hours of fine-tuning on an H20 GPU. Experiments show that state-of-the-art models gain 15 percentage points in success rate on the curated benchmark, reaching 74.8%. Moreover, Magma-R1-3B matches the performance of Qwen3-VL-235B while using only 0.5% of its parameter count. This work establishes a more reliable evaluation standard and a practical, scalable modeling pathway for GUI agents.
📝 Abstract
On-device virtual assistants like Siri and Google Assistant are increasingly pivotal, yet their capabilities are hamstrung by a reliance on rigid, developer-dependent APIs. GUI agents offer a powerful, API-independent alternative, but their adoption is hindered by the perception of poor performance: even the best models (e.g., Qwen3-VL-235B) are capped at around 60% on benchmarks like AndroidControl, far from viability for real-world use. Our research reveals that the issue lies not only with the models but with the benchmarks themselves. We identified notable shortcomings in AndroidControl, including ambiguities and factual errors, which systematically underrate agent capabilities. To address this critical oversight, we enhanced AndroidControl into AndroidControl-Curated, a refined version of the benchmark improved through a rigorous purification pipeline. On this enhanced benchmark, state-of-the-art models achieve success rates nearing 75% on complex tasks (a 15-percentage-point improvement), indicating that on-device GUI agents are closer to practical deployment than previously thought. We also introduce our new SOTA model, Magma-R1-3B, post-trained on just 2.4k curated samples using 60 hours on a single H20 GPU (approximately $60). Despite having 200 times fewer parameters, this model delivers performance comparable to Qwen3-VL-235B. We release both the AndroidControl-Curated benchmark and the Magma-R1-3B model to the research community, encouraging adoption of this enhanced benchmark to better reflect model capabilities and accelerate the development of robust, on-device virtual assistants.