NatGVD: Natural Adversarial Example Attack towards Graph-based Vulnerability Detection

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph neural network (GNN)- and graph-aware Transformer-based vulnerability detectors exhibit insufficient robustness against adversarial attacks; in particular, no prior method generates natural, semantically preserved adversarial vulnerable code. Method: NatGVD is the first adversarial attack framework to incorporate explicit naturalness constraints. It leverages semantics-preserving structural code transformations and joint optimization of graph topology to generate highly natural adversarial examples that evade both human inspection and automated analysis tools, without injecting redundant code. Contribution/Results: Targeting graph-based vulnerability detectors, NatGVD achieves evasion rates of up to 53.04% across multiple state-of-the-art models, exposing their intrinsic fragility. We further investigate defense strategies, offering both theoretical insights and practical pathways toward more robust graph-based vulnerability detection systems.

📝 Abstract
Graph-based models learn rich code graph structural information and present superior performance on various code analysis tasks. However, the robustness of these models against adversarial example attacks in the context of vulnerability detection remains an open question. This paper proposes NatGVD, a novel attack methodology that generates natural adversarial vulnerable code to circumvent GNN-based and graph-aware transformer-based vulnerability detectors. NatGVD employs a set of code transformations that modify graph structure while preserving code semantics. Instead of injecting dead or unrelated code like previous works, NatGVD considers naturalness requirements: generated examples should not be easily recognized by humans or program analysis tools. With extensive evaluation of NatGVD on state-of-the-art vulnerability detection systems, the results reveal up to 53.04% evasion rate across GNN-based detectors and graph-aware transformer-based detectors. We also explore potential defense strategies to enhance the robustness of these systems against NatGVD.
Problem

Research questions and friction points this paper is trying to address.

NatGVD generates natural adversarial code against graph-based vulnerability detectors
It modifies graph structure while preserving code semantics and naturalness
The attack achieves high evasion rates on GNN and transformer detectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates natural adversarial vulnerable code examples
Employs semantics-preserving code transformation techniques
Focuses on evading graph-based vulnerability detection systems
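To illustrate the kind of semantics-preserving transformation described above, here is a minimal, hypothetical sketch (not the paper's actual implementation, which targets code graphs such as CFGs of compiled-language programs): an AST rewrite that swaps the branches of an if/else and negates its condition. Program behavior is unchanged, but the control-flow graph edges a graph-based detector consumes are altered. Python's `ast` module is used purely for demonstration.

```python
import ast

class BranchSwap(ast.NodeTransformer):
    """Illustrative semantics-preserving rewrite: negate the condition of an
    if/else and swap its branches. Behavior is identical, but the shape of the
    control-flow graph (which branch edge is 'true') changes."""
    def visit_If(self, node):
        self.generic_visit(node)  # recurse into nested ifs first
        if node.orelse:  # only swap when an else-branch exists
            node.test = ast.UnaryOp(op=ast.Not(), operand=node.test)
            node.body, node.orelse = node.orelse, node.body
        return node

src = (
    "def f(x):\n"
    "    if x > 0:\n"
    "        return 1\n"
    "    else:\n"
    "        return -1\n"
)
tree = ast.fix_missing_locations(BranchSwap().visit(ast.parse(src)))
transformed = ast.unparse(tree)  # requires Python 3.9+

# The rewritten function must behave exactly like the original.
ns = {}
exec(transformed, ns)
assert ns["f"](5) == 1 and ns["f"](-3) == -1
```

An attack in the spirit of NatGVD would choose among many such rewrites (branch swaps, loop restructurings, statement reorderings) to perturb the graph until the detector's prediction flips, while the code remains natural-looking to a human reviewer.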
Avilash Rath
The University of Texas at Dallas
Weiliang Qi
The University of Texas at Dallas
Youpeng Li
The University of Texas at Dallas
Xinda Wang
The University of Texas at Dallas
Software Security · AI Security · Systems Security