AI Summary
To address the low reasoning accuracy and poor interpretability of large language models (LLMs) in Lean-based formal theorem proving, this paper proposes LemmaHead, the first domain-specific retrieval-augmented generation (RAG) system that integrates structured mathematical textbook knowledge. Our method introduces a textbook-aware context retrieval mechanism and a theorem-proving-oriented prompt injection strategy, enabling fine-grained lemma matching and semantic alignment. LemmaHead constructs a comprehensive Lean knowledge base covering core mathematical domains, combining embedding-based semantic retrieval with LLM-driven collaborative generation. Experimental evaluation on multiple Lean benchmark tasks demonstrates that LemmaHead significantly improves proof completion rate (+28.3%) and correctness rate (+34.7%), empirically validating the substantial benefit of textbook-level prior knowledge for formal mathematical reasoning.
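The embedding-based semantic retrieval step described above can be sketched roughly as follows. This is a minimal, self-contained illustration: a real system would use a learned sentence-embedding model, whereas the `tokenize`/`embed` functions here are a toy bag-of-words stand-in, and all names are hypothetical rather than LemmaHead's actual API.

```python
import math
import re

def tokenize(text):
    # Lowercase and keep alphabetic tokens only.
    return re.findall(r"[a-z]+", text.lower())

def embed(text, vocab):
    # Toy bag-of-words "embedding": token counts over a fixed vocabulary.
    words = tokenize(text)
    return [words.count(w) for w in vocab]

def cosine(u, v):
    # Cosine similarity between two vectors; 0.0 if either is zero.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, passages, vocab, k=1):
    # Return the k passages most similar to the query.
    q = embed(query, vocab)
    return sorted(passages, key=lambda p: cosine(q, embed(p, vocab)), reverse=True)[:k]

# Toy "textbook" passages standing in for the Lean knowledge base.
passages = [
    "A group is a set with an associative binary operation, identity, and inverses.",
    "The derivative of a function measures its instantaneous rate of change.",
]
vocab = sorted({w for p in passages for w in tokenize(p)})
print(retrieve("group identity and inverses", passages, vocab))
```

The retrieved passage would then be injected into the prompt alongside the theorem statement, which is the prompt-injection strategy the summary refers to.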
Abstract
Developing the logic necessary to solve mathematical problems or write mathematical proofs is one of the more difficult objectives for large language models (LLMs). Currently, the most popular methods in the literature consist of fine-tuning the model on written mathematical content such as academic publications and textbooks, so that the model learns to emulate the style of mathematical writing. In this project, we explore the effectiveness of retrieval-augmented generation (RAG) in addressing gaps in the mathematical reasoning of LLMs. We develop LemmaHead, a RAG knowledge base that supplements queries to the model with relevant mathematical context, with particular focus on context from published textbooks. To measure our model's performance in mathematical reasoning, our testing paradigm focuses on the task of automated theorem proving: generating proofs of a given mathematical claim in the Lean formal language.
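As a concrete illustration of the evaluation task, a proof-generation instance pairs a formal claim with a proof the model must produce. The toy example below assumes Lean 4 syntax and is not drawn from the benchmark itself:

```lean
-- Given the claim (the theorem statement), the model must generate
-- the proof script after `:= by`.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

A generated proof is checked by the Lean compiler, so correctness is verified mechanically rather than judged by a human grader.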