🤖 AI Summary
This work reveals that type checking significantly biases neural program bug detection in dynamically typed languages such as Python: over 40% of variable misuse errors in mainstream benchmarks are type-related and easily caught by static type checkers such as mypy, causing neural models (e.g., CodeBERT, GraphCodeBERT) to overfit to these trivial type errors and underperform on deeper logical defects. To address this, we conduct the first systematic analysis of how type checking distorts both the training and the evaluation of neural bug detectors, propose a framework that combines static type checking with neural analysis, and design a type-checkability-aware filtering method for dataset construction. Experiments show that incorporating type checking improves detection accuracy by up to 18.7%, and that removing type-related errors from the training data raises the average F1-score on non-type errors by 12.3%, markedly enhancing the models' ability to identify subtle logical flaws.
📝 Abstract
Motivation: Automated bug detection in dynamically typed languages such as Python is essential for maintaining code quality. Without mandatory type annotations, these languages admit errors that traditional static analysis tools struggle to catch early. Recent progress in deep neural networks has driven the adoption of neural bug detectors. For statically typed languages, the type checker is integrated into the compiler, so neural bug detectors designed for these languages implicitly take it into account.

Problem: Prior studies, however, overlook this aspect when training and testing neural bug detectors for dynamically typed languages. If an optional type checker is available, evaluating neural bug detectors on bugs that the type checker easily catches can distort their performance estimates. Moreover, including such bugs in the training set can shift a detector's focus toward the wrong kind of bugs.

Contribution: We explore the impact of type checking on various neural bug detectors for variable misuse bugs, a common class targeted by neural bug detectors. We type-check existing synthetic and real-world datasets to measure the prevalence of type-related bugs, then investigate how these bugs influence the training and testing of neural bug detectors.

Findings: Existing bug detection datasets contain a significant proportion of type-related bugs. Building on this insight, we find that integrating a neural bug detector with a type checker can be beneficial, especially when the code is annotated with types. Further investigation reveals that neural bug detectors perform better on type-related bugs than on other bugs. Moreover, removing type-related bugs from the training data improves neural bug detectors' ability to identify bugs beyond the scope of type checkers.
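To make the type-related vs. non-type-related distinction concrete, here is a minimal illustrative sketch (the function names and the averaging specification are invented for illustration, not taken from the paper). The first variable misuse changes an operand's type, so a static type checker like mypy flags it without running the code; the second swaps two variables of the same type, so it type-checks cleanly and only a deeper analysis, such as a neural bug detector, could catch it.

```python
def average_type_visible(values: list[float], count: int) -> float:
    """Variable misuse a type checker catches."""
    # Bug: the list `values` is used where the int `count` was intended.
    # mypy reports an unsupported operand type for /, so this is caught
    # statically, before any test runs.
    return sum(values) / values  # should be: sum(values) / count


def average_type_invisible(values: list[float], count: int) -> float:
    """Variable misuse beyond a type checker's reach."""
    # Bug: suppose the spec says to divide by `count` (which may differ
    # from len(values), e.g. when some entries were dropped upstream).
    # Both `count` and `len(values)` are ints, so the misuse type-checks
    # and is wrong only semantically.
    return sum(values) / len(values)  # should be: sum(values) / count
```

Calling `average_type_invisible([1.0, 2.0, 3.0], 4)` returns 2.0 even though the specification above calls for 6.0 / 4 = 1.5; no type checker can object, which is exactly the class of bugs the paper argues neural detectors should be trained and evaluated on.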