🤖 AI Summary
Severe class imbalance among CWE categories in vulnerability datasets degrades the performance of deep learning classifiers, particularly on rare vulnerability types. Method: This paper proposes a dual-path data augmentation framework leveraging large language models (LLMs): (1) generating high-quality vulnerable code samples with Qwen2.5-Coder, and (2) semantics-preserving code refactoring to rewrite existing samples. Unlike single-strategy approaches, the method integrates generation and refactoring in a unified augmentation pipeline, with CodeBERT as the downstream classifier, evaluated on the SVEN benchmark. Contribution/Results: This lightweight, LLM-driven augmentation significantly improves F1-scores for rare CWE classes (an average gain of +12.7%), demonstrating both the effectiveness and the feasibility of LLM-based vulnerability data expansion. The work establishes a paradigm for low-resource vulnerability detection.
📝 Abstract
Vulnerability code-bases often suffer from severe class imbalance, limiting the effectiveness of deep learning-based vulnerability classifiers. Data augmentation can mitigate this by addressing the scarcity of under-represented CWEs. In this context, we investigate LLM-based augmentation of vulnerable functions, comparing controlled generation of new vulnerable samples with semantics-preserving refactoring of existing ones. Using Qwen2.5-Coder to produce augmented data and CodeBERT as the vulnerability classifier on the SVEN dataset, we find that both approaches enrich vulnerable code-bases through a simple process and with reasonable quality, and that a hybrid strategy yields the largest boost in classifier performance.
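The dual-path idea described above (controlled generation of new samples plus semantics-preserving refactoring of existing ones, combined in a hybrid strategy) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names, the prompt wording, and the toy identifier-renaming rule are all assumptions, and the LLM client is left as a stub.

```python
import re

def refactor_rename(code: str, mapping: dict) -> str:
    """Semantics-preserving refactoring: rename identifiers without changing
    behavior (one of many possible rewrite rules; chosen here for brevity)."""
    for old, new in mapping.items():
        # \b word boundaries keep substrings of longer identifiers intact
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def generation_prompt(cwe_id: str) -> str:
    """Prompt for controlled generation of a fresh vulnerable sample.
    The paper uses Qwen2.5-Coder; this wording is an illustrative guess."""
    return (f"Write a short C function containing a {cwe_id} vulnerability. "
            f"Return only the code.")

def augment(dataset, rare_cwes, llm=None):
    """Hybrid strategy sketch: refactor existing rare-class samples and,
    if an LLM client is supplied, also generate brand-new ones."""
    out = list(dataset)
    for code, cwe in dataset:
        if cwe not in rare_cwes:
            continue
        # Path 2: rewrite an existing sample (hypothetical renaming map)
        out.append((refactor_rename(code, {"buf": "dst_buffer"}), cwe))
        # Path 1: generate a new sample (hypothetical .complete() client)
        if llm is not None:
            out.append((llm.complete(generation_prompt(cwe)), cwe))
    return out
```

The augmented pool would then be used to fine-tune the downstream classifier (CodeBERT in the paper) on a more balanced label distribution.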