Data and Context Matter: Towards Generalizing AI-based Software Vulnerability Detection

📅 2025-08-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI-based vulnerability detection models exhibit limited generalization to unseen codebases. This paper systematically investigates the impact of model architecture (encoder-only vs. decoder-only), parameter configurations, and training data quality on cross-project vulnerability detection performance, conducting empirical studies on C/C++ code using the large-scale BigVul benchmark. Results demonstrate that data diversity and label quality are critical determinants of generalization capability; encoder-only models significantly outperform decoder-only models in both accuracy and cross-project recall. The proposed approach achieves a 6.8% improvement in recall on BigVul and demonstrates superior generalization across multiple unseen projects. These findings provide both theoretical insights and practical guidelines for developing robust, transferable AI-driven vulnerability detection systems.

📝 Abstract
The performance of AI-based software vulnerability detection systems is often limited by their poor generalization to unknown codebases. In this research, we explore the impact of data quality and model architecture on the generalizability of vulnerability detection systems. By generalization, we mean the ability to maintain high vulnerability detection performance across C/C++ software projects not seen during training. Through a series of experiments, we demonstrate that improvements in dataset diversity and quality substantially enhance detection performance. Additionally, we compare multiple encoder-only and decoder-only models, finding that encoder-based models outperform in both accuracy and generalization. Our model achieves a 6.8% improvement in recall on the benchmark BigVul[1] dataset and also outperforms on unseen projects, showing enhanced generalizability. These results highlight the role of data quality and model selection in developing robust vulnerability detection systems, and suggest a direction for future systems with high cross-project effectiveness.
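The cross-project evaluation protocol the abstract describes, where entire projects are held out of training and recall is measured on them, can be sketched as follows. The project names, sample layout, and helper functions are illustrative assumptions, not the paper's actual code.

```python
def cross_project_split(samples, held_out):
    """Hold out entire projects so the test set contains only
    codebases never seen during training (the cross-project setting)."""
    train = [s for s in samples if s["project"] not in held_out]
    test = [s for s in samples if s["project"] in held_out]
    return train, test

def recall(preds, labels):
    """Fraction of truly vulnerable samples the detector flags."""
    true_positives = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    positives = sum(labels)
    return true_positives / positives if positives else 0.0

# Hypothetical labeled functions from three projects (1 = vulnerable).
samples = [
    {"project": "openssl", "label": 1},
    {"project": "openssl", "label": 0},
    {"project": "ffmpeg", "label": 1},
    {"project": "linux", "label": 0},
]
train, test = cross_project_split(samples, held_out={"ffmpeg"})
print(len(train), len(test))  # 3 1
```

Reporting recall only on held-out projects, rather than on a random within-project split, is what distinguishes the cross-project claim from ordinary test-set accuracy.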
Problem

Research questions and friction points this paper is trying to address.

AI vulnerability detection fails to generalize across unseen codebases
Investigating how model architecture and training data affect generalization
Addressing dataset quality issues to improve cross-project vulnerability detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

VulGate dataset removes duplicates and updates vulnerabilities
Encoder-based models outperform others in generalization ability
Improved data quality enhances cross-project vulnerability detection
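The duplicate-removal step attributed to the VulGate dataset above can be sketched as hashing a normalized form of each function so that trivially reformatted copies collapse to one sample. The normalization rules and function names here are illustrative assumptions, not the paper's actual pipeline.

```python
import hashlib
import re

def normalize(code: str) -> str:
    """Strip C/C++ comments and all whitespace so that copies of a
    function that differ only in formatting hash to the same value."""
    code = re.sub(r"/\*.*?\*/", " ", code, flags=re.DOTALL)  # block comments
    code = re.sub(r"//[^\n]*", " ", code)                    # line comments
    return re.sub(r"\s+", "", code)                          # drop whitespace

def deduplicate(samples):
    """Keep only the first occurrence of each normalized function body."""
    seen, unique = set(), []
    for label, code in samples:
        digest = hashlib.sha256(normalize(code).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((label, code))
    return unique

samples = [
    (1, "int f(int x){ return x/0; }"),
    (1, "int f(int x) { return x / 0; }  // duplicate, reformatted"),
    (0, "int g(int x){ return x+1; }"),
]
print(len(deduplicate(samples)))  # 2
```

Deduplicating before splitting matters because near-identical functions leaking across the train/test boundary inflate apparent performance without improving generalization.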
Rijha Safdar
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan, 44000
Danyail Mateen
Syed Taha Ali
National University of Science and Technology
electronic elections, body area networks, software-defined networks, cryptocurrencies
Wajahat Hussain
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan, 44000
Hussain M. Umer
Ashfaq