🤖 AI Summary
High noise in vulnerability datasets derived from vulnerability-fixing commits (VFCs), with mislabeling rates of 40% to 75%, severely impairs vulnerability detection models. Method: We propose VulSifter, the first methodology to combine a large language model (LLM) with heuristic enhancement to automatically identify vulnerability-fixing changes within VFCs, achieving an F1-score of 0.82. The LLM supplies semantic understanding of code changes and their context, while heuristic rules filter out unrelated edits such as test updates and routine bug fixes. Contribution/Results: Applying VulSifter to 5,352,105 commits crawled from 127,063 GitHub repositories yields CleanVul, a high-quality function-level dataset of 11,632 verified vulnerable functions with 90.6% correctness, comparable to established datasets such as SVEN and PrimeVul. LLMs fine-tuned on CleanVul show both higher accuracy and better generalization than those trained on uncleaned datasets: models trained on CleanVul and tested on PrimeVul outperform models trained and tested exclusively on PrimeVul.
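The heuristic side of such a pipeline can be illustrated with a minimal sketch. The rules below (skipping test files, discarding comment-only or whitespace-only hunks) are illustrative assumptions, not the actual rules used by VulSifter; all function names and the regex are hypothetical.

```python
import re

# Hypothetical pre-filters for hunks in a vulnerability-fixing commit.
# Changes that only touch tests or comments are unlikely to be the fix itself.
TEST_PATH = re.compile(r"(^|/)(tests?|testing)/|_test\.|\.test\.")

def is_test_file(path: str) -> bool:
    """Heuristic: test-file changes rarely contain the vulnerable code."""
    return bool(TEST_PATH.search(path))

def strip_noise(line: str) -> str:
    """Remove (naively) single-line comments and whitespace."""
    return re.sub(r"//.*|#.*", "", line).strip()

def is_cosmetic_change(removed: list[str], added: list[str]) -> bool:
    """True if a hunk only touches comments/whitespace, not code."""
    return [strip_noise(l) for l in removed if strip_noise(l)] == \
           [strip_noise(l) for l in added if strip_noise(l)]

def keep_change(path: str, removed: list[str], added: list[str]) -> bool:
    """Keep a hunk only if it survives every heuristic check."""
    return not is_test_file(path) and not is_cosmetic_change(removed, added)
```

For example, a hunk replacing `strcpy` with `strncpy` in `src/auth.c` would be kept, while an edit to `tests/test_auth.py` or a comment-only change would be discarded before ever reaching the LLM stage.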
📝 Abstract
Accurate identification of software vulnerabilities is crucial for system integrity. Vulnerability datasets, often derived from the National Vulnerability Database (NVD) or directly from GitHub, are essential for training machine learning models to detect these security flaws. However, these datasets frequently suffer from significant noise, typically 40% to 75%, primarily because all changes in vulnerability-fixing commits (VFCs) are automatically and indiscriminately labeled as vulnerability-related. This misclassification occurs because not all changes in a commit aimed at fixing vulnerabilities pertain to security threats; many are routine updates such as bug fixes or test improvements. This paper introduces VulSifter, the first methodology that combines a Large Language Model (LLM) with heuristic enhancement to automatically identify vulnerability-fixing changes within VFCs, achieving an F1-score of 0.82. The LLM comprehends code semantics and contextual information, while the heuristics filter out unrelated changes. We applied VulSifter in a large-scale study, crawling 127,063 repositories on GitHub and acquiring 5,352,105 commits. Using this LLM heuristic enhancement approach, we then constructed CleanVul, a high-quality dataset comprising 11,632 functions, with correctness (90.6%) comparable to established datasets such as SVEN and PrimeVul. To evaluate CleanVul, we conducted experiments fine-tuning various LLMs on CleanVul and other high-quality datasets. The results reveal that LLMs fine-tuned on CleanVul exhibit not only enhanced accuracy but also superior generalization compared to those trained on uncleaned datasets: models trained on CleanVul and tested on PrimeVul achieve higher accuracy than those trained and tested exclusively on PrimeVul.
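The LLM stage described above can be sketched as a function-level yes/no judgment. The prompt wording and the YES/NO answer protocol below are assumptions for illustration, not the paper's actual implementation; the LLM client is passed in as a callable so the logic can be exercised with a stub.

```python
from typing import Callable

# Hypothetical prompt asking the LLM to judge one function-level change
# from a commit whose message claims to fix a vulnerability.
PROMPT = (
    "You are reviewing one function changed in a commit that claims to fix "
    "a vulnerability.\n"
    "Commit message: {message}\n"
    "Function before the change:\n{before}\n"
    "Function after the change:\n{after}\n"
    "Answer YES if this change fixes a security vulnerability in this "
    "function, otherwise answer NO."
)

def is_vulnerability_fix(message: str, before: str, after: str,
                         ask_llm: Callable[[str], str]) -> bool:
    """Classify a single function-level change using an injected LLM client."""
    reply = ask_llm(PROMPT.format(message=message, before=before, after=after))
    # Treat anything not starting with YES as a rejection, erring on the
    # side of excluding noisy samples from the dataset.
    return reply.strip().upper().startswith("YES")
```

Injecting the client also makes the conservative parsing choice explicit: ambiguous or hedged LLM replies are dropped rather than admitted into the dataset.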