🤖 AI Summary
Existing approaches (e.g., VulFixMiner) analyze code changes in isolation, neglecting contextual associations among issue reports, pull requests, and historical vulnerability patterns—thus failing to detect fine-grained vulnerability fixes embedded within routine updates.
Method: We propose the first ternary contextual fusion framework that jointly models code change intent, the semantics of heterogeneous development artifacts, and historical vulnerability repair patterns. Leveraging a large language model (LLM) enhanced with chain-of-thought (CoT) reasoning and in-context learning (ICL), it enables interpretable and precise vulnerability-fix identification.
Contribution/Results: Our framework introduces two key innovations: (1) artifact semantic alignment and (2) similar-vulnerability retrieval augmentation, enabling the generation of reasoning justifications interpretable by security experts. On the BigVulFixes benchmark, it achieves a 68.1%–145.4% F1-score improvement over baselines. User studies confirm that its explanations significantly improve expert efficiency in vulnerability-fix identification.
📝 Abstract
Detecting vulnerability fix commits in open-source software (OSS) is crucial for maintaining software security. To help identify such commits, several automated approaches have been developed. However, existing approaches such as VulFixMiner and CoLeFunDa focus solely on code changes, neglecting essential context from development artifacts. Tools like Vulcurator, which integrate issue reports, fail to leverage semantic associations between different development artifacts (e.g., pull requests and historical vulnerability fixes). Moreover, they miss vulnerability fixes in tangled commits and provide no explanations, limiting their practical use. To address these limitations, we propose LLM4VFD, a novel framework that leverages Large Language Models (LLMs) enhanced with Chain-of-Thought reasoning and In-Context Learning to improve the accuracy of vulnerability fix detection. LLM4VFD comprises three components: (1) Code Change Intention, which analyzes commit summaries, purposes, and implications using Chain-of-Thought reasoning; (2) Development Artifact, which incorporates context from related issue reports and pull requests; and (3) Historical Vulnerability, which retrieves similar past vulnerability fixes to enrich context. More importantly, on top of its prediction, LLM4VFD provides a detailed analysis and explanation to help security experts understand the rationale behind each decision. We evaluated LLM4VFD against state-of-the-art techniques, including pre-trained language model-based approaches and vanilla LLMs, on a newly collected dataset, BigVulFixes. Experimental results demonstrate that LLM4VFD outperforms the best-performing existing approach by 68.1%--145.4% in F1-score. Furthermore, a user study with security experts shows that the analysis generated by LLM4VFD improves the efficiency of vulnerability fix identification.
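The three-component design described above (code change intention, development artifacts, and retrieved historical fixes) can be sketched as a prompt-assembly pipeline. The code below is an illustrative approximation, not the paper's implementation: all function names are hypothetical, and the retriever uses a simple bag-of-words cosine similarity as a stand-in for whatever retrieval method LLM4VFD actually employs.

```python
# Hypothetical sketch of LLM4VFD-style prompt assembly. Names and the
# retrieval strategy are illustrative assumptions, not the paper's design.
from collections import Counter
from math import sqrt


def _vec(text):
    """Tokenize into a bag-of-words frequency vector (lowercased whitespace split)."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two texts' bag-of-words vectors."""
    va, vb = _vec(a), _vec(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_similar_fixes(diff, history, k=2):
    """Historical Vulnerability component: rank past fixes by similarity to the diff."""
    ranked = sorted(history, key=lambda h: cosine(diff, h["diff"]), reverse=True)
    return ranked[:k]


def build_prompt(diff, artifacts, history, k=2):
    """Fuse the three contexts into one chain-of-thought style prompt string."""
    examples = retrieve_similar_fixes(diff, history, k)
    example_text = "\n".join(
        f"- Past fix ({h['cve']}): {h['summary']}" for h in examples
    )
    return (
        "You are a security analyst. Think step by step:\n"
        "1. Summarize the intent of the code change.\n"
        "2. Cross-check it against the linked issue/PR context.\n"
        "3. Compare it with similar historical vulnerability fixes.\n"
        "4. Decide: is this a vulnerability fix? Explain your rationale.\n\n"
        f"Code change:\n{diff}\n\n"
        f"Development artifacts:\n{artifacts}\n\n"
        f"Similar historical fixes:\n{example_text}\n"
    )
```

The final prompt would then be sent to an LLM, whose step-by-step answer serves both as the prediction and as the human-readable explanation the abstract emphasizes.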