Real-VulLLM: An LLM Based Assessment Framework in the Wild

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study evaluates the reliability of large language models (LLMs) for vulnerability detection and attribution reasoning in realistic software security scenarios, aiming to support their practical deployment in secure development. To address a key limitation of existing evaluations, namely their detachment from real-world contextual complexity, the authors introduce the first real-world-oriented vulnerability detection evaluation framework. Methodologically, it comprises: (1) multi-strategy, vulnerability-centric prompt templates; (2) a dynamically updated vector database built upon the National Vulnerability Database (NVD) to enable fine-grained, context-aware retrieval; and (3) a dual-dimension scoring mechanism that jointly measures detection accuracy and reasoning interpretability. Experimental results show significant improvements: +23.6% in zero-shot/few-shot vulnerability identification rate and +31.4% in attribution plausibility. The framework thus provides both a methodological foundation and empirical validation for the trustworthy integration of LLMs into high-assurance software development pipelines.
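The context-aware retrieval step described above can be illustrated with a minimal sketch: given a code finding, look up the most similar entries in an NVD-derived store and attach them to the prompt. The CVE entries, the bag-of-words embedding, and the `retrieve_context` helper below are illustrative assumptions; the paper presumably uses a learned embedding model and a live, dynamically updated database.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; stands in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical NVD-style entries (CVE id, description) standing in for
# the dynamically updated vector database.
nvd_entries = [
    ("CVE-2021-44228", "remote code execution via JNDI lookup in log message substitution"),
    ("CVE-2019-0708", "remote desktop protocol use-after-free allows remote code execution"),
    ("CVE-2017-5638", "content-type header parsing flaw enables remote command injection"),
]

def retrieve_context(code_finding, k=1):
    """Return the k most similar NVD descriptions for prompt augmentation."""
    q = embed(code_finding)
    ranked = sorted(nvd_entries, key=lambda e: cosine(q, embed(e[1])), reverse=True)
    return ranked[:k]
```

For example, `retrieve_context("log message lookup triggers remote code execution")` ranks the Log4Shell-style entry first, and its description would then be injected into the detection prompt as real-world context.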

📝 Abstract
Artificial Intelligence (AI), and more specifically Large Language Models (LLMs), have demonstrated exceptional progress in multiple areas, including software engineering; however, their capability for vulnerability detection in in-the-wild scenarios, and the corresponding reasoning, remains underexplored. Prompting pre-trained LLMs effectively offers a computationally efficient and scalable solution. Our contributions are (i) varied prompt designs for vulnerability detection and its corresponding reasoning in the wild; (ii) a real-world vector data store, constructed from the National Vulnerability Database, that provides real-time context to the vulnerability detection framework; and (iii) a scoring measure for the combined assessment of accuracy and reasoning quality. Our work aims to examine whether LLMs are ready for in-the-wild deployment, thus enabling their reliable use in the development of secure software.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM vulnerability detection capabilities in real-world scenarios
Developing effective prompting strategies for software vulnerability identification
Evaluating combined accuracy and reasoning quality for secure deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt designs for vulnerability detection and reasoning
Real-world vector data store from National Vulnerability Database
Scoring measure combining accuracy and reasoning quality
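The combined scoring measure listed above might be sketched as a weighted sum of detection correctness and reasoning quality. Everything below is an assumption for illustration: the `alpha` weight, the use of CWE attribution as a proxy for reasoning quality, and the rule that reasoning only counts when the detection itself is correct are not taken from the paper.

```python
def dual_dimension_score(detected, ground_truth, predicted_cwe, true_cwe, alpha=0.5):
    """Hypothetical combined score over detection accuracy and attribution quality.

    alpha weights detection correctness against reasoning (CWE attribution)
    quality; the paper's exact formulation is not reproduced here.
    """
    detection = 1.0 if detected == ground_truth else 0.0
    # Reasoning quality only counts when the vulnerability was correctly flagged.
    reasoning = 1.0 if detection and predicted_cwe == true_cwe else 0.0
    return alpha * detection + (1 - alpha) * reasoning
```

Under this sketch, a correct detection with the right CWE attribution scores 1.0, a correct detection with the wrong attribution scores 0.5, and a missed detection scores 0.0 regardless of the stated reasoning.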
Rijha Safdar
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan, 44000
Danyail Mateen
Department of Computer Science, FAST University, Islamabad, Pakistan, 44000
Syed Taha Ali
National University of Sciences and Technology
Research interests: electronic elections, body area networks, software-defined networks, cryptocurrencies
Wajahat Hussain
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan, 44000