🤖 AI Summary
Systematic evaluation frameworks and high-quality benchmarks for the mathematical creativity of large language models (LLMs) are currently lacking. Method: This paper introduces the first multidimensional assessment standard for mathematical creativity and constructs DeepMath-Creative, a dedicated benchmark of constructive and open-ended problems spanning algebra, geometry, and analysis. The benchmark features human-crafted problems and a hierarchical scoring scheme that emphasizes the correctness of core solution components while tolerating minor imperfections. Standardized evaluations are conducted across leading open- and closed-weight LLMs. Contribution/Results: Empirical findings indicate that LLMs’ “creativity” is largely pattern recombination rather than genuine innovation. Even the top-performing model, O3 Mini, achieves only 70% accuracy under lenient scoring, and mainly on foundational undergraduate-level problems; it consistently fails to produce substantive strategies for complex or open-ended tasks, exposing fundamental limitations in LLMs’ mathematical creativity.
📝 Abstract
To advance the mathematical proficiency of large language models (LLMs), the DeepMath team has launched an open-source initiative aimed at developing an open mathematical LLM and systematically evaluating its mathematical creativity. This paper represents the initial contribution of this initiative. While recent developments in mathematical LLMs have predominantly emphasized reasoning skills, as evidenced by benchmarks on elementary to undergraduate-level mathematical tasks, the creative capabilities of these models have received comparatively little attention, and evaluation datasets remain scarce. To address this gap, we propose a set of evaluation criteria for mathematical creativity and introduce DeepMath-Creative, a novel, high-quality benchmark comprising constructive problems across algebra, geometry, analysis, and other domains. We conduct a systematic evaluation of mainstream LLMs' creative problem-solving abilities using this dataset. Experimental results show that even under lenient scoring criteria -- emphasizing core solution components and disregarding minor inaccuracies, such as small logical gaps, incomplete justifications, or redundant explanations -- the best-performing model, O3 Mini, achieves merely 70% accuracy, primarily on basic undergraduate-level constructive tasks. Performance declines sharply on more complex problems, with models failing to provide substantive strategies for open problems. These findings suggest that, although current LLMs display a degree of constructive proficiency on familiar and lower-difficulty problems, such performance is likely attributable to the recombination of memorized patterns rather than authentic creative insight or novel synthesis.
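To make the lenient scoring criterion concrete, here is a minimal sketch in Python. The `ComponentScore` structure, its fields, and the pass/fail rule are hypothetical illustrations of a hierarchical scheme that credits correct core solution components while tolerating minor flaws; they are assumptions for exposition, not the paper's actual rubric or code.

```python
from dataclasses import dataclass

@dataclass
class ComponentScore:
    """One graded component of a candidate solution (hypothetical schema)."""
    name: str
    is_core: bool             # core construction step vs. supporting detail
    correct: bool             # whether the component is mathematically correct
    minor_flaw: bool = False  # small logical gap, incomplete justification, etc.

def lenient_score(components: list[ComponentScore]) -> float:
    """Accept a solution iff all core components are correct.

    Minor flaws (small gaps, redundant explanations) in non-core
    components are tolerated and do not lower the score.
    """
    core = [c for c in components if c.is_core]
    if not core:
        return 0.0  # nothing substantive to grade
    return 1.0 if all(c.correct for c in core) else 0.0

# A construction whose key steps are right but whose side justification
# has a small gap still counts as correct under the lenient criterion.
solution = [
    ComponentScore("construct candidate object", is_core=True, correct=True),
    ComponentScore("verify defining property", is_core=True, correct=True),
    ComponentScore("justify uniqueness", is_core=False, correct=False, minor_flaw=True),
]
print(lenient_score(solution))  # 1.0
```

Under this reading, a model's benchmark accuracy would be the mean of such per-problem scores, which is one plausible way a figure like the reported 70% under lenient grading could be computed.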