PRiSM: An Agentic Multimodal Benchmark for Scientific Reasoning via Python-Grounded Evaluation

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models (VLMs) exhibit fundamental limitations in scientific reasoning, particularly in mathematics and physics, which static, non-interactive benchmarks fail to expose: such benchmarks lack intermediate reasoning steps, provide no mechanism to verify scientific correctness, and neglect symbolic manipulation and formal rule adherence. Method: We introduce PRiSM, the first agent-driven, dynamic multimodal scientific reasoning benchmark, comprising over 24,750 university-level math and physics problems. It supports dynamic image-text inputs, executable Python code generation, and structured stepwise reasoning. We further propose PrismAgent, a framework that integrates programmatic ground-truth verification with five fine-grained evaluation tasks, including generalization, perturbation robustness, and ambiguity resolution, to rigorously assess conceptual understanding, symbolic reasoning, and logical self-correction. Results: Experiments expose systemic deficiencies in current VLMs' scientific reasoning capabilities and demonstrate PRiSM's effectiveness in diagnosing uncertainty propagation and error cascades.

📝 Abstract
Evaluating vision-language models (VLMs) in scientific domains like mathematics and physics poses unique challenges that go far beyond predicting final answers. These domains demand conceptual understanding, symbolic reasoning, and adherence to formal laws, requirements that most existing benchmarks fail to address. In particular, current datasets tend to be static, lacking intermediate reasoning steps, robustness to variations, or mechanisms for verifying scientific correctness. To address these limitations, we introduce PRiSM, a synthetic, fully dynamic, and multimodal benchmark for evaluating scientific reasoning via grounded Python code. PRiSM includes over 24,750 university-level physics and math problems, and it leverages our scalable agent-based pipeline, PrismAgent, to generate well-structured problem instances. Each problem contains dynamic textual and visual input, including a generated figure, alongside rich structured outputs: executable Python code for ground-truth generation and verification, and detailed step-by-step reasoning. The dynamic nature of our benchmark and its Python-powered automated ground-truth generation allow for fine-grained experimental auditing of multimodal VLMs, revealing failure modes, uncertainty behaviors, and limitations in scientific reasoning. To this end, we propose five targeted evaluation tasks covering generalization, symbolic program synthesis, perturbation robustness, reasoning correction, and ambiguity resolution. Through comprehensive evaluation of existing VLMs, we highlight their limitations and showcase how PRiSM enables deeper insights into their scientific reasoning capabilities.
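The abstract's core idea, checking a model's answer against executable Python ground truth on dynamically re-sampled problem instances, can be sketched as follows. This is an illustrative toy example, not PRiSM's actual implementation: the projectile problem, function names, and tolerance are all hypothetical.

```python
import math

# Hypothetical sketch of Python-grounded verification (not the paper's code).
# A problem template carries an executable ground-truth function; dynamic
# instances are the same template re-sampled with new parameters.

def ground_truth_range(v0: float, theta_deg: float, g: float = 9.81) -> float:
    """Executable ground truth for a sample physics problem:
    horizontal range of a projectile launched at speed v0 and angle theta."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

def verify(model_answer: float, v0: float, theta_deg: float,
           rel_tol: float = 1e-3) -> bool:
    """Check a model's final numeric answer against programmatic ground truth."""
    truth = ground_truth_range(v0, theta_deg)
    return math.isclose(model_answer, truth, rel_tol=rel_tol)

# One sampled instance of the template.
instance = {"v0": 20.0, "theta_deg": 45.0}
truth = ground_truth_range(**instance)
print(verify(truth, **instance))        # exact answer passes
print(verify(truth * 1.1, **instance))  # a 10% error fails
```

Because the ground truth is computed rather than stored, perturbing the instance parameters (e.g., a new `v0`) automatically yields a fresh, verifiable problem, which is what enables the perturbation-robustness and generalization tasks described above.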
Problem

Research questions and friction points this paper is trying to address.

Evaluating vision-language models' scientific reasoning beyond final answers
Addressing limitations of static datasets lacking intermediate reasoning steps
Providing automated verification of scientific correctness through Python code
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic multimodal benchmark with Python verification
Agent-based pipeline for scalable problem generation
Automated ground truth via executable Python code
👥 Authors
Shima Imani (Meta Reality Lab)
Seungwhan Moon (Facebook, Carnegie Mellon University) — Dialog Systems, Transfer Learning, Multimodal Learning, Natural Language Processing
Adel Ahmadyan (Meta Reality Lab)
Lu Zhang (Meta Reality Lab)
Kirmani Ahmed (Meta Reality Lab)
Babak Damavandi (Meta Reality Lab)