LLM-based Vulnerable Code Augmentation: Generate or Refactor?

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Severe class imbalance among CWE categories in vulnerability datasets degrades deep-learning classifiers' performance, particularly for rare vulnerability types. Method: This paper proposes a dual-path data augmentation framework leveraging large language models (LLMs): (1) generating high-quality vulnerable code samples with Qwen2.5-Coder, and (2) rewriting existing samples via semantics-preserving code refactoring. Unlike single-strategy approaches, our method integrates generation and refactoring within a unified augmentation pipeline, with CodeBERT as the downstream classifier, rigorously evaluated on the SVEN benchmark. Contribution/Results: Our lightweight, LLM-driven structured augmentation significantly improves F1-scores for rare CWE classes, achieving an average gain of +12.7%, demonstrating both effectiveness and feasibility for vulnerability data expansion. This work establishes a novel paradigm for low-resource vulnerability detection.

📝 Abstract
Vulnerability code-bases often suffer from severe imbalance, limiting the effectiveness of Deep Learning-based vulnerability classifiers. Data Augmentation could help solve this by mitigating the scarcity of under-represented CWEs. In this context, we investigate LLM-based augmentation for vulnerable functions, comparing controlled generation of new vulnerable samples with semantics-preserving refactoring of existing ones. Using Qwen2.5-Coder to produce augmented data and CodeBERT as a vulnerability classifier on the SVEN dataset, we find that our approaches are indeed effective in enriching vulnerable code-bases through a simple process and with reasonable quality, and that a hybrid strategy best boosts vulnerability classifiers' performance.
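The dual-path idea described above (generate new vulnerable samples for rare CWEs, or refactor existing ones while preserving the vulnerability) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm` is any text-completion callable (e.g. a wrapper around Qwen2.5-Coder), and the prompt templates and `gen_ratio` mixing parameter are hypothetical.

```python
import random

# Hypothetical prompt templates for the two augmentation paths.
GEN_PROMPT = (
    "Write a short C function containing a {cwe} vulnerability. "
    "Return only code."
)
REFACTOR_PROMPT = (
    "Refactor the following vulnerable function without changing its "
    "behavior or removing the vulnerability:\n{code}"
)

def augment(samples, llm, gen_ratio=0.5, seed=0):
    """Hybrid augmentation sketch.

    samples   : list of (cwe_id, source_code) pairs for under-represented CWEs
    llm       : callable taking a prompt string and returning generated text
    gen_ratio : fraction of samples augmented via generation rather than
                refactoring (the hybrid mix)
    """
    rng = random.Random(seed)
    augmented = []
    for cwe, code in samples:
        if rng.random() < gen_ratio:
            # Path 1: generate a brand-new vulnerable function for this CWE.
            new_code = llm(GEN_PROMPT.format(cwe=cwe))
        else:
            # Path 2: semantics-preserving refactoring of an existing sample.
            new_code = llm(REFACTOR_PROMPT.format(code=code))
        augmented.append((cwe, new_code))
    return augmented
```

The augmented pairs would then be merged with the original training set before fine-tuning a classifier such as CodeBERT.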
Problem

Research questions and friction points this paper is trying to address.

Addresses data imbalance in vulnerability code-bases for deep learning classifiers
Compares LLM-based generation versus refactoring to augment vulnerable functions
Evaluates effectiveness of augmentation methods using Qwen2.5-Coder and CodeBERT
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based vulnerable code generation and refactoring
Hybrid strategy boosts vulnerability classifier performance
Simple process enriches code-bases with reasonable quality
Dyna Soumhane Ouchebara
University of Mons - Computer science department
Stéphane Dupont
University of Mons
Speech processing · Multimodal interaction · Multimedia · Machine learning