🤖 AI Summary
Existing AI-based vulnerability detection models exhibit limited generalization to unseen codebases. This paper systematically investigates the impact of model architecture (encoder-only vs. decoder-only), parameter configurations, and training data quality on cross-project vulnerability detection performance, conducting empirical studies on C/C++ code using the large-scale BigVul benchmark. Results demonstrate that data diversity and label quality are critical determinants of generalization capability; encoder-only models significantly outperform decoder-only models in both accuracy and cross-project recall. The proposed approach achieves a 6.8% improvement in recall on BigVul and demonstrates superior generalization across multiple unseen projects. These findings provide both theoretical insights and practical guidelines for developing robust, transferable AI-driven vulnerability detection systems.
📝 Abstract
The performance of AI-based software vulnerability detection systems is often limited by poor generalization to unseen codebases. In this research, we explore the impact of data quality and model architecture on the generalizability of vulnerability detection systems, where generalizability means sustaining high detection performance across C/C++ software projects not seen during training. Through a series of experiments, we demonstrate that improvements in dataset diversity and quality substantially enhance detection performance. Additionally, we compare multiple encoder-only and decoder-only models, finding that encoder-based models outperform decoder-based ones in both accuracy and generalization. Our model achieves a 6.8% improvement in recall on the benchmark BigVul [1] dataset and likewise performs strongly on unseen projects, demonstrating enhanced generalizability. These results highlight the role of data quality and model selection in the development of robust vulnerability detection systems, and our findings suggest a direction for future systems with high cross-project effectiveness.