🤖 AI Summary
Current evaluations of AI agents in scientific discovery are hindered by the absence of realistic, flexible, and challenging benchmarks. To address this gap, this work introduces COMPOSITE-STEM, a benchmark of 70 open-ended tasks crafted by PhD-level researchers across physics, biology, chemistry, and mathematics. The benchmark pairs expert-authored open-ended tasks with an LLM-as-a-jury evaluation protocol that combines exact matching and rubric-based scoring, and runs agents through an adapted multimodal Terminus-2 harness on the Harbor platform, enabling fine-grained, multidimensional assessment. Experimental results reveal that even state-of-the-art models achieve only a 21% success rate, underscoring a significant capability gap in complex scientific reasoning among current AI systems.
📝 Abstract
AI agents hold growing promise for accelerating scientific discovery, yet a lack of frontier evaluations hinders their adoption into real workflows. Expert-written benchmarks have proven effective at measuring AI reasoning, but most have become saturated and measure performance only on constrained outputs. To help address this gap, we introduce COMPOSITE-STEM, a benchmark of 70 expert-written tasks in physics, biology, chemistry, and mathematics, curated by doctoral-level researchers. Our benchmark combines exact-match grading and criterion-based rubrics with an LLM-as-a-jury grading protocol, allowing more flexible assessment of scientifically meaningful outputs. Using an adapted multimodal Terminus-2 agent harness within the Harbor agentic evaluation framework, we evaluate four frontier models. The top-performing model achieves 21%, demonstrating that COMPOSITE-STEM captures capabilities beyond the reach of current agents. All tasks are open-sourced with contributor permission to support reproducibility and to promote further research toward AI's acceleration of scientific progress in these domains.
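The abstract describes a grading protocol that short-circuits to exact matching when a task has a verifiable answer and otherwise averages rubric scores across a jury of LLM judges. The paper's actual implementation is not shown here; the minimal Python sketch below illustrates one plausible way such a protocol could be structured. The `Task`, `Judge`, and `grade` names, the scoring ranges, and the averaging scheme are all illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# Hypothetical judge interface: maps (task prompt, agent answer, one rubric
# criterion) to a score in [0, 1]. In practice each judge would wrap a
# different LLM call; here it is a plain callable so the sketch is runnable.
Judge = Callable[[str, str, str], float]

@dataclass
class Task:
    prompt: str
    rubric: list[str]          # criterion descriptions for open-ended grading
    exact_answer: str | None   # set when the task has a single verifiable answer

def grade(task: Task, answer: str, jury: list[Judge]) -> float:
    """Return a score in [0, 1] for one agent answer on one task."""
    # Constrained tasks: exact-match grading, no jury needed.
    if task.exact_answer is not None:
        return 1.0 if answer.strip() == task.exact_answer.strip() else 0.0
    # Open-ended tasks: every juror scores every rubric criterion;
    # the final score averages over jurors, then over criteria.
    per_criterion = [
        mean(judge(task.prompt, answer, criterion) for judge in jury)
        for criterion in task.rubric
    ]
    return mean(per_criterion)

if __name__ == "__main__":
    # Toy jurors standing in for LLM judges of differing strictness.
    lenient: Judge = lambda p, a, c: 1.0 if c.split()[0].lower() in a.lower() else 0.5
    strict: Judge = lambda p, a, c: 1.0 if c.lower() in a.lower() else 0.0
    task = Task(
        prompt="Estimate the binding free energy and justify your method.",
        rubric=["states the thermodynamic cycle used", "reports units"],
        exact_answer=None,
    )
    print(grade(task, "We use a thermodynamic cycle; result in kcal/mol.", [lenient, strict]))
```

Averaging across independent jurors is one common way to damp the bias of any single LLM grader; alternatives such as majority voting over pass/fail verdicts per criterion would slot into the same structure.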