🤖 AI Summary
Static Code Analysis (SCA) tools frequently generate overwhelming volumes of non-actionable warnings, leading to "alert fatigue" and reduced developer responsiveness and code quality. A key bottleneck in addressing this issue is the scarcity of high-quality, fine-grained, manually annotated datasets of Java static analysis warnings—particularly those labeled for actionability. To bridge this gap, we propose a systematic methodology for collecting and classifying static warnings by their actionability (actionable vs. non-actionable). Combining integrated static analysis, automated deduplication, and rigorous human annotation, we design an end-to-end pipeline that ensures both scale and labeling consistency. The resulting dataset, NASCAR, is the first large-scale, publicly available Java static analysis benchmark (over 1 million records) explicitly annotated for actionability. NASCAR fills a critical void in the Java ecosystem, serving as a foundational resource to advance SCA tool precision, mitigate alert fatigue, and enable robust warning prioritization research.
📝 Abstract
Static Code Analysis (SCA) tools, while invaluable for identifying potential coding problems, functional bugs, and vulnerabilities, often generate an overwhelming number of warnings, many of which are non-actionable. This overload of alerts leads to "alert fatigue", a phenomenon where developers become desensitized to warnings, potentially overlooking critical issues and ultimately hindering productivity and code quality. Analyzing these warnings and training machine learning models to identify and filter them requires substantial datasets, which are currently scarce, particularly for Java. This scarcity impedes efforts to improve the accuracy and usability of SCA tools and to mitigate the effects of alert fatigue. In this paper, we address this gap by introducing a novel methodology for collecting and categorizing SCA warnings, effectively distinguishing actionable from non-actionable ones. We further leverage this methodology to generate a large-scale dataset of over 1 million Java source code warnings, named NASCAR: (Non-)Actionable Static Code Analysis Reports. To facilitate follow-up research in this domain, we make both the dataset and the tools used to generate it publicly available.