🤖 AI Summary
Existing vision-language models (VLMs) exhibit fundamental limitations in scientific reasoning, particularly in mathematics and physics. Current benchmarks fail to expose these weaknesses: they are static and non-interactive, lack intermediate reasoning steps, do not verify scientific correctness, and neglect symbolic manipulation and adherence to formal rules.
Method: We introduce PRiSM, the first agent-driven, dynamic multimodal benchmark for scientific reasoning, comprising over 24,750 university-level math and physics problems. It supports dynamic image-text inputs, executable Python code generation, and structured stepwise reasoning. Problems are generated by PrismAgent, our scalable agent-based pipeline, which couples programmatic ground-truth verification with five fine-grained evaluation tasks (including generalization, perturbation robustness, and ambiguity resolution) to rigorously assess conceptual understanding, symbolic reasoning, and logical self-correction.
Results: Experiments expose systemic deficiencies in current VLMs’ scientific reasoning capabilities and demonstrate PRiSM’s effectiveness in diagnosing uncertainty propagation and error cascades.
📝 Abstract
Evaluating vision-language models (VLMs) in scientific domains like mathematics and physics poses unique challenges that go far beyond predicting final answers. These domains demand conceptual understanding, symbolic reasoning, and adherence to formal laws, requirements that most existing benchmarks fail to address. In particular, current datasets tend to be static, lacking intermediate reasoning steps, robustness to variations, or mechanisms for verifying scientific correctness. To address these limitations, we introduce PRiSM, a synthetic, fully dynamic, and multimodal benchmark for evaluating scientific reasoning via grounded Python code. PRiSM includes over 24,750 university-level physics and math problems, and it leverages our scalable agent-based pipeline, PrismAgent, to generate well-structured problem instances. Each problem pairs dynamic textual input with a generated figure, along with rich structured outputs: executable Python code for ground-truth generation and verification, and detailed step-by-step reasoning. The dynamic nature of our benchmark and its Python-powered automated ground-truth generation allow fine-grained experimental auditing of multimodal VLMs, revealing failure modes, uncertainty behaviors, and limitations in scientific reasoning. To this end, we propose five targeted evaluation tasks covering generalization, symbolic program synthesis, perturbation robustness, reasoning correction, and ambiguity resolution. Through a comprehensive evaluation of existing VLMs, we highlight their limitations and showcase how PRiSM enables deeper insights into their scientific reasoning capabilities.
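To make the idea of dynamic, Python-verified problems concrete, here is a minimal sketch of what one such instance could look like. This is an illustrative toy example, not PRiSM's actual schema: the function names (`make_projectile_problem`, `ground_truth`, `verify`), the parameter ranges, and the tolerance are all assumptions chosen for clarity. The key pattern, shared with the benchmark's design, is that each instance carries randomized inputs plus executable code that both produces the ground-truth answer and programmatically checks a model's response.

```python
import math
import random

def make_projectile_problem(seed):
    """Generate one dynamic problem instance: randomized parameters,
    a templated question, and executable ground-truth code."""
    rng = random.Random(seed)
    v0 = rng.uniform(5.0, 30.0)      # launch speed in m/s (randomized per instance)
    theta = rng.uniform(20.0, 70.0)  # launch angle in degrees
    g = 9.81                         # gravitational acceleration, m/s^2

    question = (
        f"A projectile is launched at {v0:.1f} m/s, {theta:.1f} degrees "
        "above the horizontal. What horizontal range (in m) does it cover?"
    )

    def ground_truth():
        # Range formula on flat ground: R = v0^2 * sin(2*theta) / g
        return v0 ** 2 * math.sin(2 * math.radians(theta)) / g

    def verify(answer, rel_tol=1e-2):
        # Programmatic check of a model's numeric answer against ground truth
        return math.isclose(answer, ground_truth(), rel_tol=rel_tol)

    return {"question": question, "ground_truth": ground_truth, "verify": verify}

problem = make_projectile_problem(seed=42)
assert problem["verify"](problem["ground_truth"]())
```

Because parameters are regenerated from a seed, the same template yields unlimited fresh variants, which is what enables evaluation tasks such as perturbation robustness without any risk of answer memorization.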