🤖 AI Summary
Autoformalization is hindered by the scarcity of parallel corpora pairing natural-language mathematics with formal statements. Method: This paper proposes ATLAS, an iterative data generation framework built around a “lift–augment–synthesize” paradigm, combining iterative self-distillation from large models, formal-grammar-guided augmentation, cross-granularity theorem alignment, and multi-round feedback-based quality control. Contribution/Results: ATLAS constructs a high-quality, undergraduate-level parallel corpus of 300K theorem statements. Fine-tuning on this data yields the ATLAS theorem translator, which achieves 80.59% pass@8 on ProofNet, a 56.6-percentage-point improvement over the base model, and establishes new state-of-the-art results on both miniF2F and the newly introduced graduate-level MathQual benchmark. This work advances both the data foundation and the modeling capabilities for autoformalization.
📝 Abstract
Autoformalization, the process of automatically translating natural language mathematics into machine-verifiable formal language, has advanced alongside the progress of large language models (LLMs). However, a key obstacle to further progress is the scarcity of paired datasets aligning natural language with formal language. To address this challenge, we introduce ATLAS (Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data), an iterative data generation framework designed to produce large-scale, high-quality parallel theorem statements. Running ATLAS for 10 iterations, we construct an undergraduate-level dataset comprising 300k theorem statements and develop the ATLAS translator, which achieves accuracies of 80.59% (pass@8) and 92.99% (pass@128) on ProofNet, significantly outperforming the base model (23.99% and 47.17%) and InternLM2-Math-Plus-7B (50.94% and 80.32%). Furthermore, the ATLAS translator achieves state-of-the-art performance on both the high-school-level miniF2F dataset and the graduate-level MathQual dataset introduced in this work. The datasets, model, and code will be released to the public soon.
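To make the autoformalization task concrete, a sketch of what a natural-language/formal pair in such a parallel corpus might look like (an illustrative example, not taken from the ATLAS dataset; the paper targets Lean-style formal statements, and the theorem shown here is a standard toy case):

```lean
-- Informal (natural language) statement:
--   "For every natural number n, n + 0 = n."
-- A corresponding formal, machine-verifiable statement in Lean 4:
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
```

An autoformalization model such as the ATLAS translator takes the informal sentence as input and must produce the formal statement; pass@k then measures whether any of k sampled translations type-checks and matches the intended meaning.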