🤖 AI Summary
This study addresses the lack of systematic evaluation of static code analysis tools, particularly regarding their effectiveness in detecting exploitable vulnerabilities. Through a systematic literature review, it presents the first holistic mapping of 246 tools across dimensions including vulnerability types, application domains, underlying analysis techniques, and evaluation methodologies. The findings reveal that most tools cover only a limited set of weaknesses and often flag vulnerabilities that are not practically exploitable. Furthermore, evaluations commonly rely on small-scale, ad hoc benchmarks, which undermines the reliability of reported results. By exposing critical gaps in both the coverage of exploitable vulnerabilities and the rigor of empirical assessment, this work provides an evidence-based foundation and clear direction for future research and tool development in static analysis.
📝 Abstract
Static security analysis is a widely used technique for detecting software vulnerabilities across many weaknesses, application domains, and programming languages. While prior work has surveyed static analyses for specific weaknesses or application domains, no overview of the entire security landscape exists. We present a systematic literature review of 246 static security analyzers, examining their targeted vulnerabilities, application domains, analysis techniques, evaluation methods, and limitations. We observe that most analyzers focus on a limited set of weaknesses, that the vulnerabilities they detect are rarely exploitable, and that evaluations rely on custom benchmarks that are too small to enable robust assessment.