🤖 AI Summary
Existing benchmarks inadequately evaluate autonomous agents’ capability to perform multimodal economic tasks—such as macroeconomic analysis, financial forecasting, and policy assessment—in realistic web environments, particularly overlooking authoritative data fidelity and web interaction grounding.
Method: We introduce EconWebArena, the first benchmark designed for real-world web-based economic tasks, comprising 360 multimodal tasks across 82 authoritative websites. It requires agents to navigate live websites, understand multimodal (text-image) content, perform interactive operations, and extract time-sensitive data. Tasks are produced via a "multi-LLM generation + human curation" paradigm: multiple LLMs propose candidate tasks, which are then rigorously curated for clarity, feasibility, and source reliability, yielding strict evaluation criteria for Economic Web Intelligence. Evaluation is conducted with an MLLM-driven agent framework integrating vision-grounded perception, stepwise planning, and realistic interaction simulation.
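The perceive-plan-act loop described above can be sketched as follows. This is a minimal illustration under assumed interfaces (`Observation`, `Action`, and the `stub_model` policy are hypothetical names, not EconWebArena's actual harness); the MLLM policy is replaced by a deterministic stub so the loop structure is visible.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    screenshot: bytes          # vision-grounded perception input (page render)
    accessibility_tree: str    # structured textual view of the page

@dataclass
class Action:
    kind: str                  # e.g. "click", "type", "extract"
    target: str = ""
    value: str = ""

def stub_model(obs: Observation, goal: str, history: list) -> Action:
    """Placeholder for the MLLM policy: in the real framework this would
    plan stepwise from the goal, the current observation, and prior actions."""
    if not history:
        return Action("click", target="search-box")
    if len(history) == 1:
        return Action("type", target="search-box", value=goal)
    return Action("extract", target="result-table")

def run_episode(goal: str, max_steps: int = 10) -> list:
    """Roll out the agent until it emits an 'extract' (answer) action
    or exhausts the step budget."""
    history: list = []
    for _ in range(max_steps):
        # In a real harness, the observation would come from a live browser.
        obs = Observation(screenshot=b"", accessibility_tree="<page>")
        action = stub_model(obs, goal, history)
        history.append(action)
        if action.kind == "extract":
            break
    return history

trace = run_episode("US unemployment rate, latest month")
```

The step budget (`max_steps`) mirrors the multi-step workflows the benchmark requires: an agent that never reaches an `extract` action within the budget would simply fail the task.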
Results: Experiments expose critical limitations of current multimodal LLMs (MLLMs) in robust web navigation, multimodal economic reasoning, and faithful retrieval of authoritative data. Ablation studies confirm the importance of visual input, plan-based reasoning, and interaction design, establishing a new evaluation baseline for Economic Web Intelligence.
📝 Abstract
We introduce EconWebArena, a benchmark for evaluating autonomous agents on complex, multimodal economic tasks in realistic web environments. The benchmark comprises 360 curated tasks from 82 authoritative websites spanning domains such as macroeconomics, labor, finance, trade, and public policy. Each task challenges agents to navigate live websites, interpret structured and visual content, interact with real interfaces, and extract precise, time-sensitive data through multi-step workflows. We construct the benchmark by prompting multiple large language models (LLMs) to generate candidate tasks, followed by rigorous human curation to ensure clarity, feasibility, and source reliability. Unlike prior work, EconWebArena emphasizes fidelity to authoritative data sources and the need for grounded web-based economic reasoning. We evaluate a diverse set of state-of-the-art multimodal LLMs as web agents, analyze failure cases, and conduct ablation studies to assess the impact of visual grounding, plan-based reasoning, and interaction design. Our results reveal substantial performance gaps and highlight persistent challenges in grounding, navigation, and multimodal understanding, positioning EconWebArena as a rigorous testbed for economic web intelligence.