LLMs as verification oracles for Solidity

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Business logic errors in smart contracts are a leading cause of substantial financial losses, yet existing formal verification tools suffer from high learning barriers and limited expressiveness of specification languages. Method: This paper presents the first systematic evaluation of reasoning-capable large language models (e.g., GPT-5) as oracles for Solidity contract verification, proposing a novel “AI + formal methods” hybrid paradigm. We design a mixed quantitative–qualitative evaluation framework, benchmarking LLM outputs against industrial-grade tools (e.g., SolCMC, Certora) on real-world audit tasks. Results: Empirical evaluation demonstrates that GPT-5 effectively detects complex logical vulnerabilities in realistic auditing scenarios, achieving performance comparable to specialized formal verifiers. Our core contribution is establishing LLMs as lightweight, scalable verification oracles—overcoming key usability bottlenecks of traditional formal methods—and thereby opening a new, practical pathway for enhancing smart contract security.

📝 Abstract
Ensuring the correctness of smart contracts is critical, as even subtle flaws can lead to severe financial losses. While bug detection tools able to spot common vulnerability patterns can serve as a first line of defense, most real-world exploits and losses stem from errors in the contract business logic. Formal verification tools such as SolCMC and the Certora Prover address this challenge, but their impact remains limited by steep learning curves and restricted specification languages. Recent works have begun to explore the use of large language models (LLMs) for security-related tasks such as vulnerability detection and test generation. Yet, a fundamental question remains open: can LLMs serve as verification oracles, capable of reasoning about arbitrary contract-specific properties? In this paper, we provide the first systematic evaluation of GPT-5, a state-of-the-art reasoning LLM, in this role. We benchmark its performance on a large dataset of verification tasks, compare its outputs against those of established formal verification tools, and assess its practical effectiveness in real-world auditing scenarios. Our study combines quantitative metrics with qualitative analysis, and shows that recent reasoning-oriented LLMs can be surprisingly effective as verification oracles, suggesting a new frontier in the convergence of AI and formal methods for secure smart contract development and auditing.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs as verification oracles for smart contract correctness
Addressing limitations of formal verification tools for business logic errors
Assessing LLM capability to reason about arbitrary contract-specific properties
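To make the "arbitrary contract-specific properties" concrete, here is a minimal sketch (not code from the paper) of a token contract modeled in Python, together with a business-logic invariant of the kind a verification oracle would be asked to reason about: transfers must conserve the total supply. Pattern-based bug detectors typically cannot check such properties, because they depend on the contract's intended semantics rather than a known vulnerability signature.

```python
# Hypothetical, simplified model of an ERC-20-style token, used only to
# illustrate a contract-specific business-logic property.

class Token:
    def __init__(self, supply):
        self.total_supply = supply
        self.balances = {"owner": supply}

    def transfer(self, src, dst, amount):
        # Business-logic requirement: a sender cannot move more than it holds.
        if self.balances.get(src, 0) < amount:
            return False
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
        return True

def conservation_invariant(token):
    # Contract-specific property: transfers conserve the total supply.
    return sum(token.balances.values()) == token.total_supply

t = Token(100)
t.transfer("owner", "alice", 30)
assert conservation_invariant(t)
```

A verification oracle is asked whether every reachable state of the contract satisfies such an invariant; a subtle bug (say, crediting the recipient before debiting the sender, combined with a reentrant call) would violate it without matching any common vulnerability pattern.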
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GPT-5 as a verification oracle
Benchmarks LLM performance on verification tasks
Applies AI reasoning for smart contract security
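The oracle framing above can be sketched as a simple query-and-parse loop. The prompt template, the `SAFE`/`UNSAFE` output convention, and the structure below are assumptions for illustration, not the paper's actual protocol: a contract and a property are paired into a prompt, and the model's reply is reduced to a verdict.

```python
# Hedged sketch of wrapping an LLM as a verification oracle.
# The prompt format and verdict convention are illustrative assumptions.

PROMPT_TEMPLATE = """You are a verification oracle for Solidity.
Contract:
{source}

Property:
{prop}

Answer with exactly one line: VERDICT: SAFE or VERDICT: UNSAFE."""

def build_query(source: str, prop: str) -> str:
    # Pair the contract source with the property to check.
    return PROMPT_TEMPLATE.format(source=source, prop=prop)

def parse_verdict(reply: str):
    # Scan the reply from the end for a VERDICT line; None if the
    # model did not produce a well-formed verdict.
    for line in reversed(reply.strip().splitlines()):
        if line.startswith("VERDICT:"):
            token = line.split(":", 1)[1].strip().upper()
            if token in ("SAFE", "UNSAFE"):
                return token
    return None
```

In an actual evaluation, `build_query` would feed a reasoning LLM and the parsed verdict would be benchmarked against ground truth from tools such as SolCMC or the Certora Prover; the sketch only shows the lightweight interface that makes the oracle usable without a specification language.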