🤖 AI Summary
Current Multimodal Large Language Models (MLLMs) lack rigorous evaluation of their understanding and generation capabilities with respect to fine-grained geometric-optics principles. Method: We introduce GOBench, the first benchmark dedicated to geometric optics, comprising two core tasks: optical image generation and optical phenomenon understanding. We propose evaluation dimensions of Optical Authenticity, Aesthetic Quality, and Instruction Fidelity, release the GOBench-Gen-1k dataset of generated optical scenarios, and establish a standardized evaluation protocol for the understanding task. Our methodology combines high-quality scene-aware prompting, human subjective assessment, crafted domain-specific evaluation instructions, and comparative testing across eleven state-of-the-art MLLMs. Results: Experiments reveal pervasive principle-level errors: even the top-performing generative model, GPT-4o-Image, cannot complete all generation tasks without violating optical principles, and the best-performing understanding model, Gemini-2.5-Pro, attains only 37.35% accuracy, demonstrating severe capability gaps. This work establishes the first physics-grounded, multimodal evaluation framework for geometric optics, addressing a critical void in the field.
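As a rough illustration of how per-dimension human ratings like these could be aggregated into model-level scores, the sketch below computes mean opinion scores per dimension. The 1-5 rating scale, the `ratings` structure, and the field names are illustrative assumptions, not GOBench's released protocol.

```python
from statistics import mean

# Hypothetical ratings: per generated image, each human rater assigns a
# 1-5 score on each of the three evaluation dimensions (assumed scale).
ratings = {
    "img_001": [
        {"optical_authenticity": 2, "aesthetic_quality": 4, "instruction_fidelity": 5},
        {"optical_authenticity": 3, "aesthetic_quality": 4, "instruction_fidelity": 4},
    ],
    "img_002": [
        {"optical_authenticity": 1, "aesthetic_quality": 3, "instruction_fidelity": 3},
    ],
}

DIMENSIONS = ("optical_authenticity", "aesthetic_quality", "instruction_fidelity")

def mean_opinion_scores(ratings: dict) -> dict:
    # Average each dimension per image first (across raters), then across
    # images, so images with more raters do not dominate the final score.
    per_image = {
        img: {d: mean(r[d] for r in rater_scores) for d in DIMENSIONS}
        for img, rater_scores in ratings.items()
    }
    return {d: mean(scores[d] for scores in per_image.values()) for d in DIMENSIONS}

print(mean_opinion_scores(ratings))
```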
📝 Abstract
The rapid evolution of Multi-modality Large Language Models (MLLMs) is driving significant advances in visual understanding and generation. Nevertheless, a comprehensive assessment of their capabilities concerning fine-grained physical principles, especially in geometric optics, remains underexplored. To address this gap, we introduce GOBench, the first benchmark to systematically evaluate MLLMs' ability across two tasks: 1) Generating Optically Authentic Imagery and 2) Understanding Underlying Optical Phenomena. We curate high-quality prompts describing geometric-optics scenarios and use MLLMs to construct the GOBench-Gen-1k dataset. We then organize subjective experiments to assess the generated imagery along three dimensions, Optical Authenticity, Aesthetic Quality, and Instruction Fidelity, revealing generation flaws that violate optical principles. For the understanding task, we apply crafted evaluation instructions to test the optical understanding ability of eleven prominent MLLMs. The experimental results demonstrate that current models face significant challenges in both optical generation and understanding. The top-performing generative model, GPT-4o-Image, cannot perfectly complete all generation tasks, and the best-performing MLLM, Gemini-2.5-Pro, attains a mere 37.35% accuracy in optical understanding.
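To make the understanding-task metric concrete, here is a minimal sketch of how accuracy over a multiple-choice optical-understanding set might be computed (e.g. the 37.35% figure above). The JSON layout, the `id`/`answer` field names, and the file names are hypothetical assumptions, not GOBench's actual interface.

```python
import json

def load_records(path: str) -> dict:
    # Hypothetical format: a JSON list of {"id": ..., "answer": "A"} records.
    with open(path, encoding="utf-8") as f:
        return {r["id"]: r["answer"].strip().upper() for r in json.load(f)}

def understanding_accuracy(pred_path: str, gt_path: str) -> float:
    # Exact-match accuracy over question IDs present in both files.
    preds, gold = load_records(pred_path), load_records(gt_path)
    shared = preds.keys() & gold.keys()
    if not shared:
        raise ValueError("no overlapping question IDs between predictions and ground truth")
    correct = sum(preds[i] == gold[i] for i in shared)
    return correct / len(shared)

if __name__ == "__main__":
    # Example usage with hypothetical file names.
    acc = understanding_accuracy("gemini_2.5_pro_preds.json", "gobench_understanding_gt.json")
    print(f"optical-understanding accuracy: {acc:.2%}")
```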