🤖 AI Summary
Geometric theorem proving systems have long suffered from inconsistent formal representations, poor interoperability with other mathematical domains, and reliance on informal diagrammatic reasoning that resists verification. To address these challenges, the authors introduce LeanGeo, a unified formal framework built on Lean 4 and Mathlib that enables precise encoding and machine-checked verification of competition-level geometry problems. They also release LeanGeo-Bench, an open-source benchmark suite covering problems from the International Mathematical Olympiad (IMO) and other advanced sources. This work provides a verifiable, extensible infrastructure for automated geometric reasoning, empirically exposes the limitations of current large language models in structured geometric deduction, and supplies evaluation criteria for research at the intersection of formal mathematics and AI.
📝 Abstract
Geometry problems are a crucial testbed for AI reasoning capabilities. Most existing geometry solving systems cannot express problems within a unified framework and are therefore difficult to integrate with other mathematical fields. Moreover, because most geometric proofs rely on intuitive diagrams, verifying geometry problems is particularly challenging. To address these gaps, we introduce LeanGeo, a unified formal system for formalizing and solving competition-level geometry problems within the Lean 4 theorem prover. LeanGeo features a comprehensive library of high-level geometric theorems grounded in Lean's foundational logic, enabling rigorous proof verification and seamless integration with Mathlib. We also present LeanGeo-Bench, a formal geometry benchmark in LeanGeo, comprising problems from the International Mathematical Olympiad (IMO) and other advanced sources. Our evaluation demonstrates the capabilities and limitations of state-of-the-art Large Language Models on this benchmark, highlighting the need for further advancements in automated geometric reasoning. We open-source the theorem library and the benchmark of LeanGeo at https://github.com/project-numina/LeanGeo/tree/master.
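To give a concrete sense of what formalizing a competition-style geometry statement in Lean 4 might look like, here is a minimal hypothetical sketch. The names `Point`, `∠`, and `dist` are illustrative assumptions only, not LeanGeo's actual API, and the proof body is elided with `sorry`:

```lean
-- Hypothetical sketch (not the actual LeanGeo library):
-- a base-angle theorem stated over abstract points, with an
-- assumed angle notation `∠` and distance function `dist`.
theorem isosceles_of_equal_base_angles
    (A B C : Point)
    (h : ∠ A B C = ∠ A C B) :
    dist A B = dist A C := by
  sorry  -- proof elided; in LeanGeo such goals would be closed
         -- using its high-level geometric theorem library
```

A statement in this style can be checked by the Lean kernel once the `sorry` is replaced by a complete proof, which is what distinguishes machine-verified geometry from informal diagrammatic arguments.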