AI Summary
Existing research lacks a systematic benchmark for evaluating the capabilities of Deep Research Agents (DRAs). Method: We introduce DeepResearch Bench, the first comprehensive benchmark for DRAs, comprising 100 doctoral-level research tasks spanning 22 domains and emphasizing multi-step web exploration, precise information retrieval, and high-order synthesis. We propose two novel evaluation methodologies: (1) an adaptive reference-based report quality assessment framework that dynamically aligns its criteria with expert-defined standards; and (2) an information acquisition evaluation framework that quantifies effective citation count and citation accuracy, achieving high inter-rater agreement with human experts (Cohen's κ = 0.89). Contribution/Results: DeepResearch Bench integrates expert-crafted tasks, citation-driven quantitative evaluation, adaptive scoring criteria, and an open-source toolchain. The full dataset and evaluation framework are publicly released, enabling reproducible, scalable assessment of DRA capabilities and advancing LLM-based research agents toward practical deployment and trustworthiness.
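To make the two citation metrics concrete: effective citation count is the number of citations whose cited source actually supports the attached claim, and citation accuracy is that count divided by all citations. The sketch below is a minimal illustration under our own assumptions (the function name, label format, and toy data are hypothetical, not the benchmark's implementation); it also shows how Cohen's κ would measure judge-human agreement on the same per-citation labels.

```python
# Illustrative sketch only: the function name, label format, and toy data
# below are assumptions for exposition, not the benchmark's implementation.
from sklearn.metrics import cohen_kappa_score

def citation_metrics(supported_flags):
    """supported_flags: one boolean per citation extracted from a report,
    True if the cited web page actually supports the attached claim."""
    total = len(supported_flags)
    effective = sum(supported_flags)                # effective citation count
    accuracy = effective / total if total else 0.0  # citation accuracy
    return effective, accuracy

# Toy per-citation labels from an LLM judge and a human annotator
# (1 = supported, 0 = unsupported). The paper reports kappa = 0.89 against
# human experts; this toy data yields roughly 0.71.
llm   = [1, 1, 0, 1, 0, 1, 1, 0]
human = [1, 1, 0, 1, 1, 1, 1, 0]

print(citation_metrics([bool(x) for x in llm]))  # -> (5, 0.625)
print(round(cohen_kappa_score(llm, human), 2))   # -> 0.71
```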
Abstract
Deep Research Agents (DRAs) are a prominent category of LLM-based agents. By autonomously orchestrating multi-step web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, citation-rich reports, compressing hours of manual desk research into minutes. However, a comprehensive benchmark for systematically evaluating the capabilities of these agents remains absent. To bridge this gap, we present DeepResearch Bench, a benchmark consisting of 100 PhD-level research tasks, each meticulously crafted by domain experts across 22 distinct fields. Evaluating DRAs is inherently complex and labor-intensive. We therefore propose two novel methodologies that achieve strong alignment with human judgment. The first is a reference-based method with adaptive criteria that assesses the quality of generated research reports. The second evaluates a DRA's information retrieval and collection capabilities by measuring its effective citation count and overall citation accuracy. We have open-sourced DeepResearch Bench and key components of these frameworks at https://github.com/Ayanami0730/deep_research_bench to accelerate the development of practical LLM-based agents.
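For intuition about the reference-based method, here is one plausible shape such scoring could take: a judge scores both the generated report and an expert reference report on task-adaptive weighted criteria, and each criterion is normalized relative to the reference before aggregation. Everything below (criterion names, weights, raw scores, and the normalization itself) is a hypothetical sketch, not the benchmark's actual rubric.

```python
# Hypothetical sketch of reference-relative scoring with adaptive criteria;
# all names, weights, and scores are invented placeholders.

def reference_relative_score(criteria, target, reference):
    """criteria: {criterion: weight}, weights summing to 1.
    target / reference: {criterion: raw judge score in (0, 10]}.
    Each criterion is normalized against the reference report, so a
    report exactly matching the reference scores 0.5 overall."""
    return sum(
        w * target[c] / (target[c] + reference[c])
        for c, w in criteria.items()
    )

# Task-adaptive criteria would be generated per research task; these are
# fixed placeholders for illustration.
criteria  = {"comprehensiveness": 0.4, "depth": 0.3, "readability": 0.3}
reference = {"comprehensiveness": 8.0, "depth": 7.5, "readability": 8.5}
target    = {"comprehensiveness": 7.0, "depth": 8.0, "readability": 8.5}

print(round(reference_relative_score(criteria, target, reference), 3))  # -> 0.492
```

Normalizing against a reference report sidesteps the difficulty of asking a judge for absolute quality scores on open-ended research tasks: the judge only has to compare two reports on the same criteria.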