🤖 AI Summary
Static vulnerability detection suffers from high false-positive rates, while dynamic fuzzing relies heavily on manually written drivers and is constrained by limited target function selection. To address these challenges, this paper proposes an end-to-end automated fuzzing framework: a pre-trained deep learning model for vulnerability classification serves as a target oracle to automatically identify high-risk functions; program analysis extracts function-level features to synthesize corresponding fuzz drivers; and AFL/libFuzzer performs dynamic validation. This work is the first to leverage ML-based vulnerability detectors for fuzz target selection, establishing a closed-loop synergy between static prediction and dynamic verification, thereby reducing both manual effort and the impact of false positives. A case study on an existing vulnerability in the libgd library illustrates the workflow, with a large-scale evaluation planned. The framework offers a scalable, automation-friendly methodology for vulnerability discovery.
📝 Abstract
In vulnerability detection, machine learning has been used as an effective static analysis technique, although it suffers from a significant rate of false positives. Correspondingly, in vulnerability discovery, fuzzing has been used as an effective dynamic analysis technique, although it requires manually writing fuzz drivers. Fuzz drivers usually target a limited subset of functions in a library, which must be chosen according to certain criteria, e.g., the depth of a function or the number of paths through it. These criteria are checked by components called target oracles. In this work, we propose an automated fuzz driver generation workflow composed of: (1) identifying a likely vulnerable function by using a machine-learning vulnerability detection model as a target oracle, (2) automatically generating fuzz drivers, and (3) fuzzing the target function to find bugs that could confirm the vulnerability inferred by the target oracle. We demonstrate our method on an existing vulnerability in libgd, with a plan for a large-scale evaluation.
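The three stages above can be sketched as a small pipeline. This is a minimal illustrative sketch, not the paper's implementation: the oracle scores, function list, and driver template are hypothetical stand-ins, and real driver synthesis would use program analysis to infer each target's signature and setup code.

```python
# Hypothetical sketch of the workflow: (1) ML target oracle ranks functions,
# (2) a fuzz driver is synthesized for the top-ranked target,
# (3) the emitted driver would then be compiled and run under libFuzzer.

def rank_targets(functions, oracle):
    """Stage 1: use an ML vulnerability detector as a target oracle,
    ranking library functions by predicted vulnerability score."""
    return sorted(functions, key=oracle, reverse=True)

# Illustrative libFuzzer driver template. libgd's *Ptr decoders take
# (int size, void *data); a real generator must infer the argument
# types and any setup/teardown from program analysis.
DRIVER_TEMPLATE = """\
#include <stdint.h>
#include <stddef.h>
#include "{header}"

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {{
    /* Stage 3: feed the fuzz input to the predicted-vulnerable function. */
    {call}((int)size, (void *)data);
    return 0;
}}
"""

def synthesize_driver(func_name, header):
    """Stage 2: emit a fuzz driver targeting the chosen function."""
    return DRIVER_TEMPLATE.format(header=header, call=func_name)

# Toy oracle output: pretend the model flags a parsing entry point as risky.
scores = {"gdImageCreateFromPngPtr": 0.91, "gdImageSetPixel": 0.12}
targets = rank_targets(list(scores), scores.get)
driver = synthesize_driver(targets[0], "gd.h")
print(driver)
```

In a real deployment the printed driver would be compiled with `clang -fsanitize=fuzzer` against the library and run until a crash confirms (or fails to confirm) the oracle's static prediction.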