🤖 AI Summary
To address the low efficiency and high expertise barrier of manual formal verification, this paper proposes a two-stage framework integrating full-proof generation with strategy-driven refinement. The method proceeds in two stages: (1) a fine-tuned LLM (Llama/Mistral) generates complete Isabelle proofs and extracts structured proof sketches from them; (2) these sketches are refined stepwise at the tactic level using Isabelle's automation infrastructure. The key contribution is the first end-to-end combination of whole-proof synthesis and tactic-level generation within Isabelle, accompanied by the release of a high-quality dataset and models. Evaluated on the miniF2F benchmark, the approach achieves a 59.4% proof success rate, surpassing the prior state of the art (56.1%). Ablation studies confirm that the interaction between the two stages is the primary driver of the improvement.
📝 Abstract
Formal methods are pivotal for verifying the reliability of critical systems through rigorous mathematical proofs. However, their adoption is hindered by labor-intensive manual proofs and the expertise required to use theorem provers. Recent advancements in large language models (LLMs) offer new opportunities for automated theorem proving. Two promising approaches are generating tactics step by step and generating a whole proof directly with an LLM. However, existing work makes no attempt to combine the two. In this work, we introduce HybridProver, a dual-model proof synthesis framework that combines tactic-based generation and whole-proof synthesis to harness the benefits of both. HybridProver directly generates whole-proof candidates for evaluation, then extracts proof sketches from those candidates. A tactic-based generation model that integrates automated tools then completes the sketches via stepwise refinement. We implement HybridProver for the Isabelle theorem prover and fine-tune LLMs on our optimized Isabelle datasets. Evaluation on the miniF2F dataset demonstrates HybridProver's effectiveness: we achieve a 59.4% success rate, where the previous SOTA is 56.1%. Our ablation studies show that this result is attributable to combining whole-proof and tactic-based generation. Additionally, we show how dataset quality, training parameters, and sampling diversity affect the final result when proving theorems automatically with LLMs. All of our code, datasets, and LLMs are open source.
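The sketch-then-refine idea described above can be pictured with a toy Isabelle lemma. This is an illustrative example of our own, not taken from the paper: the lemma and tactics are hypothetical, showing only the general shape of a sketch with unverified holes versus its refined completion.

```isabelle
(* Stage 1 (illustrative): a proof sketch extracted from a whole-proof
   candidate keeps the Isar structure but marks unverified steps
   with the placeholder "sorry". *)
lemma add_comm_example: "(a::nat) + b = b + a"
proof -
  show ?thesis sorry
qed

(* Stage 2 (illustrative): tactic-level refinement closes each hole,
   here by appealing to the library fact add.commute. *)
lemma add_comm_example': "(a::nat) + b = b + a"
proof -
  show ?thesis by (rule add.commute)
qed
```

In the actual framework, the second stage searches over tactics with Isabelle's automation rather than relying on a single hand-picked rule.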