AI Summary
Existing research lacks a systematic evaluation of key capabilities of Deep Research (DR) agents in corporate financial analysis.
Method: We introduce FinDeepResearch, the first fine-grained, cross-lingual, multi-market benchmark for DR agents, coupled with HisRubric, a hierarchical evaluation framework that models professional analyst reasoning across data identification, metric computation, and strategic interpretation. We conduct comparative experiments spanning DR agents and LLMs with deep reasoning and web search capabilities, covering 64 publicly listed companies across eight markets.
Contribution/Results: Our evaluation yields 15,808 fine-grained scores, revealing critical capability gaps in cross-lingual and cross-market financial reasoning. The benchmark data, evaluation code, and results will be fully open-sourced, establishing a standardized, reproducible infrastructure to advance the trustworthy development of DR agents.
Abstract
Deep Research (DR) agents, powered by advanced Large Language Models (LLMs), have recently garnered increasing attention for their capability to conduct complex research tasks. However, the existing literature lacks a rigorous and systematic evaluation of DR agents' capabilities in critical research analysis. To address this gap, we first propose HisRubric, a novel evaluation framework with a hierarchical analytical structure and a fine-grained grading rubric for rigorously assessing DR agents' capabilities in corporate financial analysis. This framework mirrors the professional analyst's workflow, progressing from data recognition to metric calculation, and finally to strategic summarization and interpretation. Built on this framework, we construct the FinDeepResearch benchmark, which comprises 64 listed companies from 8 financial markets across 4 languages, encompassing a total of 15,808 grading items. We further conduct extensive experiments on FinDeepResearch using 16 representative methods, including 6 DR agents, 5 LLMs equipped with both deep reasoning and search capabilities, and 5 LLMs with deep reasoning capabilities only. The results reveal the strengths and limitations of these approaches across diverse capabilities, financial markets, and languages, offering valuable insights for future research and development. The benchmark and evaluation code will be made publicly available.