LiquiLM: Bridging the Semantic Gap in Liquidity Flaw Audit via DCN and LLMs

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical challenge of detecting hidden liquidity vulnerabilities in Proof of Liquidity (PoL) mechanisms—vulnerabilities arising from intricate economic logic that traditional methods fail to identify, thereby threatening DeFi security. To bridge the semantic gap between smart contract implementations and high-level liquidity intents, this work proposes LiquiLM, an approach that integrates large language models (e.g., GPT-4o, Gemini 3 Pro) with a Dynamic Co-Attention Network (DCN), enabling dynamic semantic alignment between code and economic specifications. The method substantially enhances both the accuracy and interpretability of vulnerability detection, achieving an F1-score exceeding 90% on a benchmark of 1,490 verified contracts. In real-world audits of 1,380 PoL and Ethereum economic contracts, it uncovered 238 high-risk cases and contributed to the confirmation of 10 CVE-listed vulnerabilities.
📝 Abstract
Traditional consensus mechanisms, such as Proof of Stake (PoS), increasingly exhibit an excessive dependency on large liquidity providers. Although the Proof of Liquidity (PoL) mechanism serves as a critical paradigm for incentivizing sustained liquidity provision and ensuring market stability, its shift from passive asset staking to active liquidity management significantly increases the complexity of the underlying smart contracts' economic models and interaction logic. This makes hidden liquidity logic flaws difficult to detect with traditional methods, seriously threatening the system stability and user asset security of mainstream DeFi and emerging PoL ecosystems. To address this, we propose the LiquiLM framework, which integrates Large Language Models (LLMs) with a Dynamic Co-Attention Network (DCN). By establishing a dynamic interaction between liquidity-critical contracts and flaw descriptions, the framework bridges the semantic gap between low-level code implementations and high-level liquidity intents. We evaluate LiquiLM on 1,490 validation contracts, measuring precision, recall, specificity, and F1-score. The results show that it is highly effective at auditing and explaining liquidity flaws: with Gemini 3 Pro and GPT-4o as backbone models, respectively, the F1-scores both exceed 90%. Furthermore, in an in-depth audit of 1,380 real-world PoL and Ethereum economic contracts, LiquiLM identifies 238 high-risk contracts and assists in discovering 10 vulnerabilities that have been assigned CVEs.
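The paper does not reproduce the DCN architecture on this page. Purely as an illustration of the general idea—a co-attention step that lets contract-code token embeddings and flaw-description token embeddings attend to each other—a generic sketch might look like the following. All names, shapes, and the bilinear scoring form are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(code_emb, flaw_emb, w):
    """Generic co-attention between two embedding sequences.

    code_emb: (n, d) contract-code token embeddings (hypothetical)
    flaw_emb: (m, d) flaw-description token embeddings (hypothetical)
    w:        (d, d) learned bilinear weight (hypothetical)

    Returns a flaw-aware code summary and a code-aware flaw summary,
    i.e. each side re-expressed through attention over the other.
    """
    affinity = code_emb @ w @ flaw_emb.T        # (n, m) pairwise relevance scores
    attn_over_code = softmax(affinity, axis=0)  # for each flaw token, weights over code tokens
    attn_over_flaw = softmax(affinity, axis=1)  # for each code token, weights over flaw tokens
    code_summary = attn_over_code.T @ code_emb  # (m, d) code attended by flaw tokens
    flaw_summary = attn_over_flaw @ flaw_emb    # (n, d) flaw text attended by code tokens
    return code_summary, flaw_summary

# Toy usage with random embeddings (an LLM encoder would supply these in practice).
rng = np.random.default_rng(0)
c = rng.normal(size=(6, 8))   # 6 code tokens, dim 8
q = rng.normal(size=(4, 8))   # 4 flaw-description tokens
w = rng.normal(size=(8, 8))
cs, fs = co_attention(c, q, w)
print(cs.shape, fs.shape)
```

The two summaries could then be fed to a classifier or back into an LLM prompt to ground flaw explanations in specific code regions; how LiquiLM actually combines them is not specified here.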
Problem

Research questions and friction points this paper is trying to address.

liquidity flaw
Proof of Liquidity
smart contract
semantic gap
DeFi security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Liquidity Flaw Audit
Large Language Models
Dynamic Co-Attention Network
Proof of Liquidity
Semantic Gap Bridging