AI Summary
Static Application Security Testing (SAST) tools often suffer from high false positive rates, which undermines developer trust. This work proposes a novel false positive filtering approach based on Graph Convolutional Networks (GCNs), uniquely integrating GCNs with Code Property Graphs (CPGs) to effectively model both structural and semantic aspects of source code for distinguishing genuine vulnerabilities from false alarms. Evaluated on the CamBenchCAP dataset, the method achieves 100% test accuracy, and attains 96.6% accuracy on CryptoAPI-Bench. Notably, some instances labeled as misclassifications are in fact justified security warnings, reflecting the model's strong discriminative capability and its conservative, security-oriented design philosophy.
Abstract
Static Application Security Testing (SAST) tools play a vital role in modern software development by automatically detecting potential vulnerabilities in source code. However, their effectiveness is often limited by a high rate of false positives, which wastes developers' effort and undermines trust in automated analysis. This work presents a Graph Convolutional Network (GCN) model designed to classify SAST reports as true or false positives. The model leverages Code Property Graphs (CPGs) constructed from static analysis results to capture both structural and semantic relationships within code. Trained on the CamBenchCAP dataset, the model achieved an accuracy of 100% on the test set using an 80/20 train-test split. Evaluation on the CryptoAPI-Bench benchmark further demonstrated the model's practical applicability, reaching an overall accuracy of up to 96.6%. A detailed qualitative inspection revealed that many cases marked as misclassifications corresponded to genuine security weaknesses, indicating that the model effectively reflects conservative, security-aware reasoning. Identified limitations include incomplete control-flow representation due to missing interprocedural connections. Future work will focus on integrating call graphs, applying graph explainability techniques, and extending training data across multiple SAST tools to improve generalization and interpretability.
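To make the core mechanism concrete, the following is a minimal NumPy sketch of the kind of pipeline the abstract describes: a CPG (here a tiny hypothetical 4-node graph with made-up node features) is passed through GCN propagation steps, node embeddings are mean-pooled into a single graph embedding, and a logistic read-out scores the report as a true or false positive. This is an illustrative toy, not the authors' implementation; the graph, feature dimensions, and weights are all invented for demonstration.

```python
import numpy as np

# Toy Code Property Graph for one SAST report: 4 nodes (e.g. AST/CFG/PDG
# nodes of the flagged snippet) with hypothetical 3-dim feature vectors.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)               # untrained demo weights
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

H = gcn_layer(A, X, W1)                      # message passing over the CPG
H = gcn_layer(A, H, W2)
graph_embedding = H.mean(axis=0)             # mean-pool nodes -> one vector

# Logistic read-out: estimated probability the report is a true positive.
w_out = rng.normal(size=2)
p_true_positive = 1.0 / (1.0 + np.exp(-graph_embedding @ w_out))
print(f"P(true positive) = {p_true_positive:.3f}")
```

In a trained model, `W1`, `W2`, and `w_out` would be fit on labeled reports (e.g. CamBenchCAP), and the node features would come from the static analyzer's CPG rather than being hand-written.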