Rango: Adaptive Retrieval-Augmented Proving for Automated Software Verification

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the heavy manual proof burden in Coq formal verification, this paper presents Rango, a fully automated, adaptive retrieval-augmented proof synthesis tool. Its core idea is retrieval augmentation at every step of the proof: Rango dynamically retrieves relevant premises (lemmas and definitions) and similar proofs from the current project and includes them in the context of its fine-tuned LLM, adapting both to the project and to the evolving proof state. The authors also introduce CoqStoq, a dataset of 2,226 open-source Coq projects and 196,929 theorems from GitHub, comprising training data and a curated evaluation benchmark of well-maintained projects. On this benchmark, Rango synthesizes proofs for 32.0% of the theorems, 29% more than the prior state-of-the-art tool Tactician, and adding retrieved relevant proofs to its context yields a 47% increase in the number of theorems proven.

📝 Abstract
Formal verification using proof assistants, such as Coq, enables the creation of high-quality software. However, the verification process requires significant expertise and manual effort to write proofs. Recent work has explored automating proof synthesis using machine learning and large language models (LLMs). This work has shown that identifying relevant premises, such as lemmas and definitions, can aid synthesis. We present Rango, a fully automated proof synthesis tool for Coq that automatically identifies relevant premises and also similar proofs from the current project and uses them during synthesis. Rango uses retrieval augmentation at every step of the proof to automatically determine which proofs and premises to include in the context of its fine-tuned LLM. In this way, Rango adapts to the project and to the evolving state of the proof. We create a new dataset, CoqStoq, of 2,226 open-source Coq projects and 196,929 theorems from GitHub, which includes both training data and a curated evaluation benchmark of well-maintained projects. On this benchmark, Rango synthesizes proofs for 32.0% of the theorems, which is 29% more theorems than the prior state-of-the-art tool Tactician. Our evaluation also shows that Rango adding relevant proofs to its context leads to a 47% increase in the number of theorems proven.
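The abstract describes retrieval augmentation at every proof step: ranking the project's existing proofs and premises by similarity to the current proof state and placing the best matches in the LLM's context. Below is a minimal sketch of that loop, not Rango's actual implementation; the token-overlap ranking, the function names, and the prompt layout are all illustrative assumptions.

```python
import re

def tokenize(text):
    """Crude word-level tokenizer standing in for Coq-aware tokenization."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve_similar(proof_state, project_proofs, k=2):
    """Rank prior proofs from the project by token overlap with the
    current proof state; return the top k. (Illustrative heuristic only.)"""
    state_tokens = tokenize(proof_state)
    def overlap(proof):
        return len(state_tokens & tokenize(proof))
    return sorted(project_proofs, key=overlap, reverse=True)[:k]

def build_prompt(proof_state, project_proofs):
    """Assemble the model context: retrieved similar proofs, then the goal.
    In an adaptive setup this runs again at every proof step, so the
    retrieved context tracks the evolving proof state."""
    retrieved = retrieve_similar(proof_state, project_proofs)
    context = "\n".join(f"(* similar proof *) {p}" for p in retrieved)
    return f"{context}\nCurrent goal: {proof_state}\nNext tactic:"

# Hypothetical project-internal proofs serving as the retrieval corpus.
project_proofs = [
    "intros n. induction n. reflexivity. simpl. rewrite IHn. reflexivity.",
    "intros a b. apply andb_comm.",
    "unfold compose. reflexivity.",
]
prompt = build_prompt("forall n, n + 0 = n", project_proofs)
```

Here the induction proof shares the token `n` with the goal, so it ranks first in the retrieved context; a real system would use learned embeddings and also retrieve premises, not just proofs.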
Problem

Research questions and friction points this paper is trying to address.

Automated Software Verification
Self-Adaptive Method
Information Retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated Proof Method
Machine Learning
Adaptive Verification