🤖 AI Summary
Large language models exhibit significant limitations in multimodal reasoning on Chemistry Olympiad problems—which combine molecular structure diagrams, chemical notation, and textual inference—due to poor cross-modal alignment and visual grounding. Method: We introduce the first dedicated multimodal benchmark for Chemistry Olympiads, built on authentic USNCO exam questions with fine-grained structural annotations, and systematically evaluate 40 open- and closed-source multimodal large language models. We identify a pervasive “multimodal fusion failure” phenomenon—removing the image paradoxically improves accuracy—and propose a Chain-of-Thought (CoT) prompting strategy to enhance visual localization and reasoning consistency, validated via occlusion-based interpretability analysis. Contribution/Results: Experiments reveal that state-of-the-art models (e.g., GPT-5, Gemini 2.5 Pro) achieve sub-50% average accuracy; CoT prompting boosts accuracy by 12.3%. This work establishes a new scientific multimodal reasoning benchmark, uncovers a novel failure mode, and provides a transferable optimization framework for domain-specific multimodal understanding.
📝 Abstract
Multimodal scientific reasoning remains a significant challenge for large language models (LLMs), particularly in chemistry, where problem-solving relies on symbolic diagrams, molecular structures, and structured visual data. Here, we systematically evaluate 40 proprietary and open-source multimodal LLMs (MLLMs), including GPT-5, o3, Gemini-2.5-Pro, and Qwen2.5-VL, on a curated benchmark of Olympiad-style chemistry questions drawn from over two decades of U.S. National Chemistry Olympiad (USNCO) exams. These questions require integrated visual and textual reasoning across diverse modalities. We find that many models struggle with modality fusion: in some cases, removing the image actually improves accuracy, indicating misalignment in vision-language integration. Chain-of-Thought prompting consistently enhances both accuracy and visual grounding, as demonstrated through ablation studies and occlusion-based interpretability. Our results reveal critical limitations in the scientific reasoning abilities of current MLLMs and provide actionable strategies for developing more robust and interpretable multimodal systems in chemistry. This work offers a timely benchmark for measuring progress in domain-specific multimodal AI and underscores the need for further advances at the intersection of artificial intelligence and scientific reasoning.
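The occlusion-based interpretability mentioned above follows a standard recipe: slide a masking patch over the input image and record how much the model's answer confidence drops when each region is hidden. The sketch below is a minimal, model-agnostic illustration of that idea (not the paper's exact implementation); `score_fn` is a hypothetical stand-in for an MLLM's confidence on the correct answer.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide an occluding patch over a 2D image and record the score drop.

    A large drop means the occluded region mattered for the prediction,
    so the returned heatmap highlights visually grounded regions.
    """
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # zero out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": scores an image by the mean intensity of its top-left
# corner, so occluding that corner produces the largest score drop.
def toy_score(img):
    return img[:4, :4].mean()

img = np.ones((8, 8))
heat = occlusion_map(img, toy_score)
i, j = np.unravel_index(heat.argmax(), heat.shape)
print(int(i), int(j))  # most sensitive patch: 0 0
```

In the paper's setting, a faithful model should show the largest drops over the molecular structure regions the question actually depends on; flat heatmaps under image occlusion are consistent with the "multimodal fusion failure" the authors report.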