Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying the adversarial robustness of Graph Neural Networks (GNNs) against structural perturbations remains computationally intractable due to the high cost of exact constraint solving. Method: This paper proposes an efficient, SAT-driven lightweight verification framework. Its core innovation is a polynomial-time partial solver that replaces expensive complete constraint solvers, while preserving verification accuracy. The method models structural attacks as logical constraints and leverages the piecewise-linear nature of GNN forward propagation to generate compact encodings, supporting diverse GNN architectures and real-world graph datasets. Contribution/Results: Experiments demonstrate that our approach maintains over 95% detection accuracy while accelerating verification by two to three orders of magnitude on average. It exhibits strong scalability and, for the first time, enables practical, large-scale verification of structural robustness for GNNs on real-world graphs.
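The core idea of a "partial solver" can be illustrated with the classic example of an incomplete, polynomial-time satisfiability test: unit propagation. The sketch below is an assumption for illustration only (the summary does not specify the paper's actual solver); it returns UNSAT on a propagation conflict, SAT if propagation alone satisfies every clause, and UNKNOWN otherwise, rather than branching like a complete DPLL/CDCL solver.

```python
def unit_propagate(clauses):
    """Incomplete SAT check over CNF clauses given as lists of
    DIMACS-style literals (positive/negative non-zero ints)."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return "UNSAT", assignment   # clause falsified: conflict
            if len(unassigned) == 1:         # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    all_satisfied = all(
        any(assignment.get(abs(lit)) == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return ("SAT" if all_satisfied else "UNKNOWN"), assignment
```

A verifier built on such a test can certify robustness quickly whenever the partial answer is conclusive, and needs a complete (expensive) solver only for the UNKNOWN residue.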

📝 Abstract
Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of adversarial attacks, where an adversary can change the output with a small perturbation of the input. Techniques for solving the adversarial robustness problem (determining whether such an attack exists) were originally developed for image classification, but variants exist for many other machine learning architectures. In the case of graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and the state-of-the-art techniques in the area proceed via reduction to constraint solving, working on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing the use of powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
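The "reduction to constraint solving" for piecewise-linear networks typically hinges on encoding each ReLU activation y = max(0, x) as mixed-integer linear constraints. The sketch below shows the standard big-M encoding used throughout MILP-based neural network verification (a general illustration, not claimed to be RobLight's exact encoding), given pre-activation bounds l <= x <= u with l < 0 < u and a binary phase indicator d:

```python
# Standard big-M MILP encoding of y = max(0, x) with bounds l <= x <= u:
#   y >= x,  y >= 0,  y <= x - l*(1 - d),  y <= u*d,  d in {0, 1}
# When d = 0 the constraints force y = 0 (inactive ReLU, only consistent
# with x <= 0); when d = 1 they force y = x (active ReLU, x >= 0).
def relu_bigM_feasible(x, y, d, l, u):
    """True iff (x, y, d) satisfies the big-M ReLU constraints."""
    assert d in (0, 1) and l <= x <= u
    return (y >= x and
            y >= 0 and
            y <= x - l * (1 - d) and
            y <= u * d)
```

For each fixed x, the only feasible y (over both choices of d) is max(0, x), which is what lets a MILP solver reason exactly about the network's forward pass.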
Problem

Research questions and friction points this paper is trying to address.

Verifying robustness of graph neural networks against adversarial structural attacks
Detecting the existence of adversarial attacks under small input perturbations
Improving structural robustness verification using efficient partial solvers
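As a baseline for the attack-existence problem above, a naive exact check simply enumerates structural perturbations. The toy sketch below is hypothetical (a one-layer mean-aggregation GNN and a single-edge-flip budget) and shows why enumeration does not scale: the search space grows combinatorially with the flip budget, which is the cost that constraint-based verification avoids.

```python
import itertools
import numpy as np

def predict(A, X, W, node):
    """Class prediction of a toy one-layer mean-aggregation GNN."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9   # avoid division by zero
    H = (A / deg) @ X @ W                       # mean over neighbors, then linear map
    return int(np.argmax(H[node]))

def robust_to_one_flip(A, X, W, node):
    """Exact check: does any single undirected edge flip change the prediction?"""
    base = predict(A, X, W, node)
    n = A.shape[0]
    for i, j in itertools.combinations(range(n), 2):
        A2 = A.copy()
        A2[i, j] = A2[j, i] = 1 - A2[i, j]      # flip one undirected edge
        if predict(A2, X, W, node) != base:
            return False                         # adversarial flip found
    return True
```

With a budget of k flips the enumeration grows as roughly O(n^(2k)), so even modest graphs and budgets are out of reach for brute force.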
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces powerful solvers with efficient partial solvers
Uses lightweight satisfiability testing for verification
Focuses on structural robustness in graph neural networks