🤖 AI Summary
Existing geometry problem solving (GPS) benchmarks overlook auxiliary-line construction and lack fine-grained process evaluation, which hinders rigorous assessment of the long-step reasoning capabilities of multimodal large language models (MLLMs).
Method: We introduce GeoLaux, the first benchmark explicitly designed for auxiliary-line-dependent, multi-step geometric reasoning. It comprises 2,186 problems spanning both calculation and proving questions, with an average of 6.51 reasoning steps and 41.8% of problems requiring auxiliary lines. We also propose a novel five-dimensional evaluation strategy that scores answer correctness, process correctness, process quality, auxiliary-line impact, and error causes, grounded in human annotations and model outputs, to enable interpretable assessment of reasoning paths and auxiliary-line construction.
Results: Experiments on 13 state-of-the-art MLLMs show that nine models suffer a performance drop of more than 50% on long-step reasoning, that models tend to take shortcuts on proving problems relative to calculation problems, and that auxiliary-line awareness is broadly lacking; targeted enhancement of this capability yields substantial gains in overall geometric reasoning performance.
📝 Abstract
Geometry problem solving (GPS) requires models to master diagram comprehension, logical reasoning, knowledge application, numerical computation, and auxiliary line construction. This presents a significant challenge for Multimodal Large Language Models (MLLMs). However, existing benchmarks for evaluating MLLM geometry skills overlook auxiliary line construction and lack fine-grained process evaluation, making them insufficient for assessing MLLMs' long-step reasoning abilities. To bridge these gaps, we present the GeoLaux benchmark, comprising 2,186 geometry problems that cover both calculation and proving questions. Notably, the problems require an average of 6.51 reasoning steps, with a maximum of 24 steps, and 41.8% of them need auxiliary line construction. Building on the dataset, we design a novel five-dimensional evaluation strategy that assesses answer correctness, process correctness, process quality, auxiliary line impact, and error causes. Extensive experiments on 13 leading MLLMs (including both thinking and non-thinking models) yield three pivotal findings. First, models exhibit substantial performance degradation on problems with extended reasoning steps (nine models show a performance drop of more than 50%). Second, compared to calculation problems, MLLMs tend to take shortcuts when solving proving problems. Third, models lack auxiliary line awareness, and enhancing this capability proves particularly beneficial for improving overall geometry reasoning. These findings establish GeoLaux both as a benchmark for evaluating MLLMs' long-step geometric reasoning with auxiliary lines and as a guide for capability advancement. Our dataset and code are included in the supplementary materials and will be released.
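To make the five-dimensional evaluation strategy concrete, a minimal sketch of how a per-problem evaluation record could be organized is shown below. The field names, score scale, and error categories are illustrative assumptions (the error categories mirror the skills listed at the start of the abstract); they are not the actual schema released with GeoLaux.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional


class ErrorCause(Enum):
    """Assumed error categories, named after the skills GPS requires."""
    DIAGRAM_MISREADING = "diagram_misreading"
    LOGICAL_ERROR = "logical_error"
    KNOWLEDGE_ERROR = "knowledge_error"
    NUMERICAL_ERROR = "numerical_error"
    AUXILIARY_LINE_ERROR = "auxiliary_line_error"


@dataclass
class GeoLauxEvaluation:
    """One model solution scored along the five dimensions (hypothetical fields)."""
    problem_id: str
    answer_correct: bool                      # 1. answer correctness
    process_correct: bool                     # 2. process correctness
    process_quality: float                    # 3. e.g. a 0-1 step-level quality score
    auxiliary_line_impact: Optional[float]    # 4. None if no auxiliary line is needed
    error_causes: List[ErrorCause] = field(default_factory=list)  # 5. empty if fully correct


def long_step_accuracy(records: List[GeoLauxEvaluation],
                       steps_by_id: Dict[str, int],
                       min_steps: int) -> float:
    """Answer accuracy restricted to problems with at least `min_steps` reasoning steps."""
    subset = [r for r in records if steps_by_id[r.problem_id] >= min_steps]
    if not subset:
        return 0.0
    return sum(r.answer_correct for r in subset) / len(subset)
```

Keeping answer correctness and process correctness as separate fields is what allows the long-step analysis above to distinguish models that happen to reach the right answer from those whose reasoning path is actually sound.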