🤖 AI Summary
Existing GUI-based multimodal agent benchmarks suffer from limited environmental diversity (e.g., single-device settings), coarse-grained evaluation, and labor-intensive construction. To address these limitations, Crab is introduced as the first cross-device (desktop and mobile) GUI multimodal agent benchmark, featuring 120 human-annotated tasks and a unified, environment-agnostic evaluation framework. Methodologically, each task is decomposed into sub-goals organized in a graph, enabling fine-grained automatic evaluation that credits intermediate progress instead of a binary pass/fail signal. The framework also provides an efficient mechanism for constructing tasks and their evaluators. Experimental results show that a single agent backed by GPT-4o achieves the best task completion ratio of 38.01% on Crab Benchmark-v0. All framework code, agent code, and task datasets are publicly released, establishing a standardized, open infrastructure for benchmarking multimodal GUI agents.
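The graph-based evaluation idea can be sketched as follows. This is an illustrative toy, not Crab's actual API: the `SubGoal` class, the checker callables, and the example task are all hypothetical, assuming only what the summary states, namely that tasks decompose into dependent sub-goals whose satisfaction is checked against environment state.

```python
# Hypothetical sketch: a task is decomposed into sub-goal checkers arranged
# in a dependency graph, and the completion ratio is the fraction of
# sub-goals satisfied with all their prerequisites also satisfied.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SubGoal:
    name: str
    check: Callable[[dict], bool]              # inspects environment state
    prerequisites: List[str] = field(default_factory=list)


def completion_ratio(goals: Dict[str, SubGoal], env_state: dict) -> float:
    """Fraction of sub-goals satisfied in dependency order."""
    done: set = set()
    for name, goal in goals.items():           # assume insertion order is topological
        if all(p in done for p in goal.prerequisites) and goal.check(env_state):
            done.add(name)
    return len(done) / len(goals)


# Toy cross-device task: copy a file on the desktop, then open it on the phone.
goals = {
    "file_copied": SubGoal("file_copied", lambda s: s.get("copied", False)),
    "file_opened": SubGoal("file_opened", lambda s: s.get("opened", False),
                           prerequisites=["file_copied"]),
}
print(completion_ratio(goals, {"copied": True, "opened": False}))  # 0.5
```

An agent that finishes only the first step still earns partial credit (0.5 here), which is the fine-grained signal a binary success metric would lose.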
📝 Abstract
The development of autonomous agents increasingly relies on Multimodal Language Models (MLMs) to perform tasks described in natural language in GUI environments, such as websites, desktop computers, or mobile phones. Existing benchmarks for MLM agents in interactive environments are limited by their focus on a single environment, lack of detailed and generalized evaluation methods, and the complexities of constructing tasks and evaluators. To overcome these limitations, we introduce Crab, the first agent benchmark framework designed to support cross-environment tasks, incorporating a graph-based fine-grained evaluation method and an efficient mechanism for task and evaluator construction. Our framework supports multiple devices and can be easily extended to any environment with a Python interface. Leveraging Crab, we developed the cross-platform Crab Benchmark-v0, comprising 120 tasks in computer desktop and mobile phone environments. We evaluated four advanced MLMs using different single- and multi-agent system configurations on this benchmark. The experimental results demonstrate that the single-agent configuration with GPT-4o achieves the best completion ratio of 38.01%. All framework code, agent code, and task datasets are publicly available at https://github.com/camel-ai/crab.
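The claim that the framework "can be easily extended to any environment with a Python interface" can be illustrated with a minimal sketch. The `Environment` base class, `ToyDesktop` backend, and method names below are all hypothetical, not Crab's actual API; the sketch only shows the kind of small observe/act contract that lets one evaluator run against desktop, mobile, or any other backend.

```python
# Illustrative sketch (not Crab's real interface): a benchmark stays
# environment-agnostic if every platform exposes the same small Python
# contract -- an observation plus a set of named actions.
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict


class Environment(ABC):
    """Minimal contract a desktop, mobile, or web backend must satisfy."""

    @abstractmethod
    def observe(self) -> Any:
        """Return the current observation (e.g. a screenshot or state dict)."""

    @abstractmethod
    def actions(self) -> Dict[str, Callable[..., None]]:
        """Expose the named actions an agent may invoke."""

    def step(self, action: str, **kwargs: Any) -> Any:
        """Execute a named action, then return the new observation."""
        self.actions()[action](**kwargs)
        return self.observe()


class ToyDesktop(Environment):
    """A stand-in backend used only to demonstrate the plumbing."""

    def __init__(self) -> None:
        self.clipboard = ""

    def observe(self) -> Any:
        return {"clipboard": self.clipboard}

    def actions(self) -> Dict[str, Callable[..., None]]:
        return {"copy": lambda text: setattr(self, "clipboard", text)}


env = ToyDesktop()
print(env.step("copy", text="hello"))  # {'clipboard': 'hello'}
```

Any new platform only needs to implement `observe` and `actions`; the agent loop and the evaluator never see platform-specific code.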