"Are We Done Yet?": A Vision-Based Judge for Autonomous Task Completion of Computer Use Agents

📅 2025-11-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Computer-using agents (CUAs) frequently fail to determine reliably whether a task has been completed during digital interface interactions. To address this, we propose the first vision-language-driven task completion evaluation and feedback framework tailored for macOS. Our approach introduces the first large-scale, macOS-specific task completion annotation dataset (1,260 tasks across 42 applications) and designs a multimodal completion-judgment mechanism grounded in vision-language models (VLMs). This mechanism jointly processes screenshots and natural-language task descriptions to recognize completion state end to end. Furthermore, we integrate a self-correcting feedback loop that enables iterative refinement of agent behavior. Experimental results demonstrate 73% accuracy in task completion detection; when augmented with our feedback mechanism, overall task success rates improve by an average of 27%. This advancement significantly enhances the autonomy and robustness of CUAs operating on macOS.
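The judge-plus-feedback architecture described above can be sketched as a simple execute-then-judge loop. This is a minimal illustration, not the paper's implementation: the `Verdict` type, the `act`/`judge` callables, and `run_with_judge` are all hypothetical names standing in for the agent's action step and the VLM-based completion judge.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Verdict:
    done: bool       # judge's completion decision for the current screen
    feedback: str    # judge's explanation, fed back to the agent on failure

def run_with_judge(task: str,
                   act: Callable[[str, str], bytes],       # (task, hint) -> screenshot
                   judge: Callable[[bytes, str], Verdict],  # (screenshot, task) -> verdict
                   max_rounds: int = 3) -> Tuple[bool, int]:
    """Act, capture a screenshot, ask the judge; on a 'not done'
    verdict, retry with the judge's feedback as a correction hint."""
    hint = ""
    for round_no in range(1, max_rounds + 1):
        screenshot = act(task, hint)
        verdict = judge(screenshot, task)
        if verdict.done:
            return True, round_no
        hint = verdict.feedback   # self-correction signal for the next attempt
    return False, max_rounds
```

In the paper's setting, `judge` would wrap a VLM call that receives the screenshot and the task description; the loop structure is what turns the evaluator into a feedback mechanism rather than a one-shot verifier.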

📝 Abstract
Computer Use Agents (CUAs) are designed to autonomously operate digital interfaces, yet they often fail to reliably determine whether a given task has been completed. We present an autonomous evaluation and feedback framework that uses vision-language models to assess task completion directly from screenshots and task descriptions. Our dataset covers 42 built-in macOS applications and 1,260 human-labeled tasks across a wide range of scenarios. Our framework achieves up to 73 percent accuracy in task success detection and yields an average relative improvement of 27 percent in overall task success when evaluator feedback is applied. These results show that vision-based evaluation can serve as an effective feedback mechanism that improves the reliability and self-correction of autonomous computer-use agents.
Problem

Research questions and friction points this paper is trying to address.

Autonomous agents struggle to reliably determine whether a computer task has been completed
Vision-language models assess task completion from screenshots and descriptions
Framework improves agent reliability through visual feedback and self-correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language models assess task completion from screenshots
Framework evaluates 42 macOS applications across diverse scenarios
Achieves 73% accuracy in autonomous task success detection