HybridProver: Augmenting Theorem Proving with LLM-Driven Proof Synthesis and Refinement

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency and high expertise barrier of manual formal verification, this paper proposes a two-stage framework integrating full-proof generation with strategy-driven refinement. Methodologically: (1) A fine-tuned LLM (Llama/Mistral) generates complete Isabelle proofs and extracts structured proof sketches; (2) These sketches are refined stepwise at the tactic level using Isabelle’s automation infrastructure. Our key contribution is the first end-to-end co-optimization of full-proof synthesis and tactic-level strategy generation within Isabelle, accompanied by the release of a high-quality dataset and models. Evaluated on the miniF2F benchmark, our approach achieves a 59.4% proof success rate—significantly surpassing the prior state-of-the-art (56.1%). Ablation studies confirm that the synergistic interaction between the two stages is the primary driver of performance improvement.
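The sketch-then-refine idea can be illustrated with a small Isabelle-style example (the lemma, the failing step, and the suggested tactic here are illustrative assumptions, not taken from the paper):

```isabelle
(* Stage 1 produces a whole-proof candidate; suppose its inductive step
   fails to check. The structure is kept and the failing step becomes a
   hole, yielding a proof sketch: *)
lemma "rev (rev xs) = xs"
proof (induct xs)
  case Nil
  then show ?case by simp
next
  case (Cons a xs)
  then show ?case sorry  (* hole left for stage 2 *)
qed

(* Stage 2 refines the sketch at the tactic level, closing each hole with
   Isabelle's automation (e.g. a tactic found via simp/auto/sledgehammer),
   e.g. replacing the sorry above with: by simp *)
```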

📝 Abstract
Formal methods are pivotal for verifying the reliability of critical systems through rigorous mathematical proofs. However, their adoption is hindered by labor-intensive manual proofs and the expertise required to use theorem provers. Recent advancements in large language models (LLMs) offer new opportunities for automated theorem proving. Two promising approaches are generating tactics step by step and generating a whole proof directly with an LLM. However, existing work makes no attempt to combine the two approaches. In this work, we introduce HybridProver, a dual-model proof synthesis framework that combines tactic-based generation and whole-proof synthesis to harness the benefits of both approaches. HybridProver generates whole proof candidates for evaluation directly, then extracts proof sketches from those candidates. It then uses a tactic-based generation model that integrates automated tools to complete the sketches via stepwise refinement. We implement HybridProver for the Isabelle theorem prover and fine-tune LLMs on our optimized Isabelle datasets. Evaluation on the miniF2F dataset illustrates HybridProver's effectiveness. We achieve a 59.4% success rate on miniF2F, where the previous SOTA is 56.1%. Our ablation studies show that this SOTA result is attributable to combining whole-proof and tactic-based generation. Additionally, we show how the dataset quality, training parameters, and sampling diversity affect the final result during automated theorem proving with LLMs. All of our code, datasets, and LLMs are open source.
Problem

Research questions and friction points this paper is trying to address.

Combines tactic-based and whole-proof synthesis for theorem proving
Addresses labor-intensive manual proofs in formal methods
Improves success rate in automated theorem proving with LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines tactic-based and whole-proof generation
Extracts proof sketches from whole candidates
Uses stepwise refinement with automated tools
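The three steps above can be sketched as a toy two-stage loop. Everything here is a stand-in: the proof representation, the `step_ok` stub (playing the role of Isabelle checking a step), and the tactic list are illustrative assumptions, not the paper's actual interface.

```python
# Toy sketch of a HybridProver-style pipeline: try whole-proof candidates
# first, then extract a sketch and refine its holes with automation.

AUTOMATION = ["auto", "simp", "blast", "sledgehammer"]  # assumed tactic pool

def step_ok(statement, tactic):
    """Stub standing in for the theorem prover checking one proof step."""
    works = {
        ("have lemma_step", "simp"): True,   # only simp closes this step
        ("show goal", "blast"): True,
    }
    return works.get((statement, tactic), False)

def check_whole_proof(proof):
    """Stage 1 check: does the whole candidate verify as-is?"""
    return all(step_ok(s, t) for s, t in proof)

def extract_sketch(proof):
    """Keep the statement skeleton; failing tactics become holes (None)."""
    return [(s, t if step_ok(s, t) else None) for s, t in proof]

def refine(sketch):
    """Stage 2: close each hole by trying automation tactics stepwise."""
    done = []
    for s, t in sketch:
        if t is None:
            t = next((a for a in AUTOMATION if step_ok(s, a)), None)
            if t is None:
                return None  # hole could not be closed
        done.append((s, t))
    return done

def hybrid_prove(candidates):
    for proof in candidates:           # accept any candidate that checks
        if check_whole_proof(proof):
            return proof
    for proof in candidates:           # otherwise refine extracted sketches
        refined = refine(extract_sketch(proof))
        if refined is not None:
            return refined
    return None

# A candidate whose first step used the wrong tactic ("auto" fails here):
candidate = [("have lemma_step", "auto"), ("show goal", "blast")]
print(hybrid_prove([candidate]))
```

The candidate fails as a whole proof, so the loop falls through to sketch extraction and the hole is closed by the first automation tactic that succeeds, mirroring the division of labor between the two stages.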
Jilin Hu
Professor, East China Normal University
Spatial-Temporal Data, Machine Learning, Transportation
Jianyu Zhang
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Yongwang Zhao
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Talia Ringer
Assistant Professor, University of Illinois at Urbana-Champaign
Proof Engineering, Programming Languages, Verification, Proof Automation, Dependent Types