🤖 AI Summary
Computer-using agents (CUAs) often fail to reliably determine whether a task has been completed while operating digital interfaces. To address this, we propose the first vision-language-driven task completion evaluation and feedback framework tailored for macOS. Our approach introduces the first large-scale, macOS-specific task completion annotation dataset (1,260 human-labeled tasks across 42 built-in applications) and designs a multimodal completion-judgment mechanism grounded in vision-language models (VLMs). This mechanism jointly processes screenshots and natural language task descriptions to recognize completion state end to end. A self-correcting feedback loop then feeds the evaluator's verdicts back to the agent, enabling iterative refinement of its behavior. Experiments show up to 73% accuracy in task completion detection; with evaluator feedback applied, overall task success rates improve by an average of 27% (relative). These advances substantially enhance the autonomy and robustness of CUAs operating on macOS.
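To make the judgment step concrete, here is a minimal sketch of how a screenshot and task description could be sent to a VLM for a yes/no completion verdict. It assumes an OpenAI-compatible vision endpoint; the `gpt-4o` model name, the `judge_completion` helper, and the prompt wording are placeholders for illustration, not the paper's actual evaluator.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumption: an OpenAI-compatible VLM endpoint is available

def judge_completion(screenshot_path: str, task_description: str) -> bool:
    """Ask a vision-language model whether the task shown on screen
    appears to be complete. Returns True for a YES verdict."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name, not the paper's backbone
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Task: {task_description}\n"
                         "Based on the screenshot, has this task been "
                         "completed? Answer YES or NO."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # Reduce the free-form answer to a boolean completion verdict
    return response.choices[0].message.content.strip().upper().startswith("YES")
```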
📝 Abstract
Computer Use Agents (CUAs) are designed to autonomously operate digital interfaces, yet they often fail to reliably determine whether a given task has been completed. We present an autonomous evaluation and feedback framework that uses vision-language models to assess task completion directly from screenshots and task descriptions. Our dataset covers 42 built-in macOS applications and 1,260 human-labeled tasks across a wide range of scenarios. Our framework achieves up to 73 percent accuracy in task success detection and yields an average relative improvement of 27 percent in overall task success when evaluator feedback is applied. These results show that vision-based evaluation can serve as an effective feedback mechanism that improves the reliability and self-correction of autonomous computer-use agents.
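The feedback mechanism can be pictured as a simple evaluate-and-retry loop. The sketch below is illustrative only: `agent` and `evaluator` are assumed interfaces standing in for the paper's components, and the round limit is arbitrary.

```python
def run_with_feedback(agent, evaluator, task: str, max_rounds: int = 3) -> bool:
    """Self-correction loop: act, screenshot, judge, and feed any failure
    verdict back to the agent as a hint for the next attempt.
    `agent` and `evaluator` are hypothetical interfaces, not the paper's API."""
    feedback = None
    for _ in range(max_rounds):
        agent.execute(task, hint=feedback)       # perform UI actions, guided by prior feedback
        screenshot = agent.capture_screenshot()  # capture the resulting screen state
        done, verdict = evaluator.judge(screenshot, task)
        if done:
            return True       # evaluator confirms completion
        feedback = verdict    # e.g. "the Save dialog is still open"
    return False  # retries exhausted without a completion verdict
```

An evaluator-driven retry of this shape is what, per the abstract, produces the reported 27 percent average relative improvement in overall task success.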