🤖 AI Summary
Current large language models lack objective evaluation of their information-synthesis capabilities in deep research scenarios, where judgments about coherent long-form content generated from numerous retrieved results typically remain subjective. This work proposes DeepSynth-Eval, a benchmark that reverse-engineers research queries and "Oracle Contexts" from high-quality review papers, decoupling retrieval from synthesis to isolate the assessment of integration ability. By combining high-fidelity contexts, a plan-and-write agent pipeline, and fine-grained checklists, the framework translates subjective writing quality into quantifiable metrics that jointly capture factual coverage and structural adherence. Experiments across 96 tasks demonstrate that multi-turn plan-and-write agents significantly outperform single-pass generation, notably reducing hallucinations and better satisfying complex structural constraints.
📝 Abstract
The evolution of Large Language Models (LLMs) towards autonomous agents has catalyzed progress in Deep Research. While retrieval capabilities are well-benchmarked, the post-retrieval synthesis stage, where agents must digest massive amounts of context and consolidate fragmented evidence into coherent, long-form reports, remains under-evaluated due to the subjectivity of open-ended writing. To bridge this gap, we introduce DeepSynth-Eval, a benchmark designed to objectively evaluate information consolidation capabilities. We leverage high-quality survey papers as gold standards, reverse-engineering research requests and constructing "Oracle Contexts" from their bibliographies to isolate synthesis from retrieval noise. We propose a fine-grained evaluation protocol using General Checklists (for factual coverage) and Constraint Checklists (for structural organization), transforming subjective judgment into verifiable metrics. Experiments across 96 tasks reveal that synthesizing information from hundreds of references remains a significant challenge. Our results demonstrate that agentic plan-and-write workflows significantly outperform single-turn generation, effectively reducing hallucinations and improving adherence to complex structural constraints.
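The checklist-based protocol described above can be sketched in miniature: a generated report is scored by the fraction of checklist items it satisfies. This is a hypothetical illustration, not the paper's implementation; the item names and the simple keyword-matching judge below are placeholder assumptions (the benchmark presumably uses an LLM-based judge to verify each item).

```python
# Minimal sketch of checklist-style scoring (hypothetical, not the
# paper's actual judge): a report's score is the fraction of
# checklist items whose evidence it contains.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    description: str
    keywords: list[str]  # evidence a naive judge searches for


def checklist_score(report: str, checklist: list[ChecklistItem]) -> float:
    """Return the fraction of checklist items the report satisfies."""
    text = report.lower()
    hits = sum(
        all(kw.lower() in text for kw in item.keywords)
        for item in checklist
    )
    return hits / len(checklist) if checklist else 0.0


# Illustrative "General Checklist" with two factual-coverage items.
general = [
    ChecklistItem("covers attention mechanisms", ["attention"]),
    ChecklistItem("discusses scaling laws", ["scaling"]),
]

report = "This survey reviews attention mechanisms across architectures."
coverage = checklist_score(report, general)  # 0.5: one of two items met
```

A "Constraint Checklist" would follow the same shape, with items checking structural requirements (e.g. section ordering or length limits) instead of factual content.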