Agentic Reward Modeling: Verifying GUI Agent via Online Proactive Interaction

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods for GUI agents suffer from poor scalability, weak support for open-ended tasks, and insufficient state observability, making it difficult to provide reliable reward signals. This work proposes an agentic interactive verification paradigm that, for the first time, shifts verification from passive observation to proactive interaction, exploiting the inherent property of GUI tasks—easy to verify but hard to solve—to overcome the limitations of purely visual observation. The authors introduce VAGEN, a reinforcement learning–based verifier-agent framework that integrates task planning, environment interaction, and evidence collection, together with a test-time scaling strategy. On the OSWorld-Verified and AndroidWorld benchmarks, the approach substantially outperforms LLM-as-a-Judge baselines, achieving significantly higher verification accuracy.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) is pivotal for the continuous evolution of GUI agents, yet existing evaluation paradigms face significant limitations. Rule-based methods suffer from poor scalability and cannot handle open-ended tasks, while LLM-as-a-Judge approaches rely on passive visual observation and often fail to capture latent system states due to partial state observability. To address these challenges, we advocate a paradigm shift from passive evaluation to Agentic Interactive Verification. We introduce VAGEN, a framework that employs a verifier agent equipped with interaction tools to autonomously plan verification strategies and proactively probe the environment for evidence of task completion. Leveraging the insight that GUI tasks are typically "easy to verify but hard to solve", VAGEN overcomes the bottlenecks of visual limitations. Experimental results on the OSWorld-Verified and AndroidWorld benchmarks demonstrate that VAGEN significantly improves evaluation accuracy compared to LLM-as-a-Judge baselines and further enhances performance through test-time scaling strategies.
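The plan–probe–judge loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: `VerifierAgent`, `FakeEnv`, and the probe names are hypothetical placeholders standing in for the real interaction tools.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A single observation gathered by the verifier."""
    tool: str
    result: str

@dataclass
class VerifierAgent:
    """Hypothetical sketch of an agentic interactive verifier.

    Instead of judging a final screenshot (LLM-as-a-Judge), the agent
    plans probes, executes them against the environment, and collects
    evidence before emitting a binary verifiable reward.
    """
    max_steps: int = 5
    evidence: list = field(default_factory=list)

    def plan(self, task, evidence):
        # Placeholder planner: probe the file system, then the
        # settings store, then stop. A real planner would be an LLM.
        probes = ["read_file", "query_settings"]
        return probes[len(evidence)] if len(evidence) < len(probes) else None

    def verify(self, task, env):
        for _ in range(self.max_steps):
            probe = self.plan(task, self.evidence)
            if probe is None:
                break
            result = env.execute(probe)  # proactive interaction
            self.evidence.append(Evidence(probe, result))
        # Placeholder judgment: success iff every probe met expectations.
        return all("ok" in e.result for e in self.evidence)

class FakeEnv:
    """Stub environment that answers every probe affirmatively."""
    def execute(self, probe):
        return f"{probe}: ok"

reward = VerifierAgent().verify("enable dark mode", FakeEnv())
print(reward)  # → True
```

The key contrast with passive judging is that `verify` actively executes probes to surface latent system state (e.g. a config file's contents) that a screenshot alone would not reveal.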
Problem

Research questions and friction points this paper is trying to address.

GUI agent
reward verification
partial observability
evaluation paradigm
open-ended tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Reward Modeling
Interactive Verification
GUI Agent
Reinforcement Learning with Verifiable Rewards
Test-time Scaling
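The test-time scaling idea listed above is commonly realized by running several independent verification rollouts and aggregating their verdicts; the paper's exact strategy may differ, so the sketch below is a generic majority-vote version with a simulated noisy verifier standing in for real environment rollouts.

```python
from collections import Counter
import random

def run_verifier(task, seed):
    """Stand-in for one stochastic verification rollout.

    A real rollout would let the verifier agent interact with the GUI
    environment; here we simulate a noisy binary verdict.
    """
    rng = random.Random(seed)
    return rng.random() < 0.8  # hypothetical 80%-accurate verifier

def scaled_verify(task, n_rollouts=5):
    """Test-time scaling: majority vote over independent rollouts."""
    verdicts = [run_verifier(task, seed) for seed in range(n_rollouts)]
    return Counter(verdicts).most_common(1)[0][0]

print(scaled_verify("export spreadsheet as PDF"))
```

Because individual rollout errors are somewhat independent, the majority vote is more reliable than any single rollout, at the cost of extra inference-time compute.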