A Large-Scale Collection Of (Non-)Actionable Static Code Analysis Reports

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static code analysis (SCA) tools frequently generate overwhelming volumes of non-actionable warnings, leading to “alert fatigue” and, in turn, reduced developer responsiveness and code quality. A key bottleneck in addressing this issue is the scarcity of high-quality, fine-grained, manually annotated Java static warning datasets—particularly those labeled for actionability. To bridge this gap, we propose the first systematic methodology for collecting and classifying static warnings based on their actionability (actionable vs. non-actionable). Leveraging integrated static analysis, automated deduplication, and rigorous human annotation, we design an end-to-end pipeline ensuring both scale and labeling consistency. The resulting dataset, NASCAR, is the first large-scale, publicly available Java static analysis benchmark (>1M records) explicitly annotated for actionability. NASCAR fills a critical void in the Java ecosystem, serving as a foundational resource to advance SCA tool precision, mitigate alert fatigue, and enable robust warning prioritization research.

📝 Abstract
Static Code Analysis (SCA) tools, while invaluable for identifying potential coding problems, functional bugs, or vulnerabilities, often generate an overwhelming number of warnings, many of which are non-actionable. This overload of alerts leads to “alert fatigue”, a phenomenon where developers become desensitized to warnings, potentially overlooking critical issues and ultimately hindering productivity and code quality. Analyzing these warnings and training machine learning models to identify and filter them requires substantial datasets, which are currently scarce, particularly for Java. This scarcity impedes efforts to improve the accuracy and usability of SCA tools and mitigate the effects of alert fatigue. In this paper, we address this gap by introducing a novel methodology for collecting and categorizing SCA warnings, effectively distinguishing actionable from non-actionable ones. We further leverage this methodology to generate a large-scale dataset of over 1 million entries of Java source code warnings, named NASCAR: (Non-)Actionable Static Code Analysis Reports. To facilitate follow-up research in this domain, we make both the dataset and the tools used to generate it publicly available.
Problem

Research questions and friction points this paper is trying to address.

Static code analysis tools generate overwhelming non-actionable warnings
Alert fatigue causes developers to overlook critical code issues
Scarcity of labeled warning datasets, especially for Java, impedes machine learning improvements for SCA tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Methodology for collecting and categorizing SCA warnings
Dataset with over 1 million Java SCA warning entries
Publicly available tools and dataset for follow-up research
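The collection methodology hinges on automated deduplication of raw warnings before human annotation. The paper's exact fingerprinting scheme is not reproduced here, so the following is a minimal illustrative sketch (the field names and the fingerprint recipe are assumptions, not NASCAR's actual implementation): each warning is keyed by its rule ID, file path, and a whitespace-normalized code snippet, so the same finding reported at shifted line numbers collapses into one record.

```python
import hashlib
import re


def fingerprint(warning: dict) -> str:
    """Hash rule ID, file path, and a whitespace-normalized snippet.

    Ignoring line numbers lets the same finding survive unrelated edits
    elsewhere in the file. (Illustrative scheme, not NASCAR's own.)
    """
    snippet = re.sub(r"\s+", " ", warning["snippet"]).strip()
    key = f'{warning["rule"]}|{warning["file"]}|{snippet}'
    return hashlib.sha256(key.encode("utf-8")).hexdigest()


def deduplicate(warnings: list[dict]) -> list[dict]:
    """Keep the first warning seen for each fingerprint, preserving order."""
    seen, unique = set(), []
    for w in warnings:
        fp = fingerprint(w)
        if fp not in seen:
            seen.add(fp)
            unique.append(w)
    return unique
```

A scheme like this is what makes million-record scale tractable for annotators: only one representative of each duplicated warning needs a manual actionability label.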