🤖 AI Summary
This work proposes Deep Research, a real-time interactive multi-agent system for scientific inquiry that overcomes the limitations of conventional closed-loop, batch-oriented AI research paradigms, which typically entail hours-long cycles and preclude human-in-the-loop interaction. By orchestrating specialized agents for planning, data analysis, literature retrieval, and novelty detection—and integrating a persistent world state with contextual memory—the system achieves response times on the order of minutes. It supports both semi-autonomous and fully autonomous operation, enabling, for the first time, an interactive multi-agent research workflow amenable to real-time human intervention. Evaluated on the BixBench computational biology benchmark, Deep Research attains 48.8% accuracy on open-ended questions and 64.4% on multiple-choice questions, substantially outperforming existing methods by 14 to 26 percentage points.
📝 Abstract
Artificial intelligence systems for scientific discovery have demonstrated remarkable potential, yet existing approaches remain largely proprietary and operate in batch-processing modes requiring hours per research cycle, precluding real-time researcher guidance. This paper introduces Deep Research, a multi-agent system enabling interactive scientific investigation with turnaround times measured in minutes. The architecture comprises specialized agents for planning, data analysis, literature search, and novelty detection, unified through a persistent world state that maintains context across iterative research cycles. Two operational modes support different workflows: a semi-autonomous mode with selective human checkpoints, and a fully autonomous mode for extended investigations. Evaluation on the BixBench computational biology benchmark demonstrated state-of-the-art performance, achieving 48.8% accuracy on open-response questions and 64.4% on multiple-choice questions, exceeding existing baselines by 14 to 26 percentage points. Analysis of architectural constraints, including open-access literature limitations and challenges inherent to automated novelty assessment, informs practical deployment considerations for AI-assisted scientific workflows.
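To make the described architecture concrete, the loop below is a minimal, hypothetical sketch of how specialized agents, a persistent world state, and the two operational modes could fit together. All names (`WorldState`, the agent stubs, `run_cycle`, the `checkpoint` callback) are illustrative assumptions, not the paper's actual API; the agent bodies are placeholders standing in for LLM-backed components.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Persistent context carried across iterative research cycles."""
    question: str
    findings: list = field(default_factory=list)
    cycle: int = 0

# Placeholder agents; in the real system each would be an LLM-backed service.
def plan(state):              return f"plan for cycle {state.cycle}"
def analyze(state, plan):     return f"analysis of {plan}"
def search_literature(state): return "related-work summary"
def assess_novelty(state):    return "novelty notes"

def run_cycle(state, autonomous=True, checkpoint=None):
    """Run one research cycle. In semi-autonomous mode, a human
    checkpoint may amend the plan before the other agents execute it."""
    p = plan(state)
    if not autonomous and checkpoint is not None:
        p = checkpoint(p)  # selective human intervention point
    state.findings.append({
        "plan": p,
        "analysis": analyze(state, p),
        "literature": search_literature(state),
        "novelty": assess_novelty(state),
    })
    state.cycle += 1
    return state

state = WorldState(question="Which genes are differentially expressed?")
state = run_cycle(state)  # fully autonomous cycle
state = run_cycle(state, autonomous=False,
                  checkpoint=lambda p: p + " (human-approved)")
print(state.cycle)  # → 2
```

The design point the sketch illustrates is that both modes share one loop: the world state persists findings across cycles, and the only difference between fully and semi-autonomous operation is whether the plan passes through a human checkpoint.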