🤖 AI Summary
Numerical instability in floating-point computations, manifesting as silent accuracy degradation rather than explicit NaN/INF failures, poses a critical yet under-detected challenge in machine learning. Conventional detection methods rely on crash signals, rendering them ineffective against such latent issues. This paper introduces Soft Assertion (SA), a novel framework that models numerical stability verification as a sensitivity-prediction task under input perturbations. SA trains a lightweight supervised model on unit-test data and applies directed input mutation, guided by the model's output, to automatically identify input regions prone to numerical errors. Crucially, SA eliminates dependence on explicit failure signals and establishes a learnable paradigm for numerical stability verification. Evaluated on the GRIST benchmark comprising 79 programs, SA achieves 100% detection of known defects. Furthermore, it uncovers 13 previously unknown numerical bugs in real-world ML projects, one already confirmed by developers; a case study shows a clinical tumor detection model producing erroneous predictions due to floating-point instability.
📝 Abstract
Machine learning (ML) applications have become an integral part of our lives. ML applications extensively use floating-point computation and involve very large/small numbers; thus, maintaining the numerical stability of such complex computations remains an important challenge. Numerical bugs can lead to system crashes, incorrect output, and wasted computing resources. In this paper, we introduce a novel idea, namely soft assertions (SA), to encode safety/error conditions for the places where numerical instability can occur. A soft assertion is an ML model automatically trained using the dataset obtained during unit testing of unstable functions. Given the values at the unstable function in an ML application, a soft assertion reports how to change these values in order to trigger the instability. We then use the output of soft assertions as signals to effectively mutate inputs to trigger numerical instability in ML applications. In the evaluation, we used the GRIST benchmark, a total of 79 programs, as well as 15 real-world ML applications from GitHub. We compared our tool with 5 state-of-the-art (SOTA) fuzzers. We found all the GRIST bugs and outperformed the baselines. We found 13 numerical bugs in real-world code, one of which had already been confirmed by the GitHub developers. While the baselines mostly found bugs that report NaN and INF, our tool also found numerical bugs with incorrect output. We showed one case where a tumor detection model, trained on brain MRI images, should have predicted "tumor", but instead, it incorrectly predicted "no tumor" due to the numerical bugs. Our replication package is located at https://figshare.com/s/6528d21ccd28bea94c32.
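The core loop described in the abstract can be sketched compactly: a soft assertion examines the values flowing into an unstable function and tells the fuzzer which direction to mutate them. The paper trains an ML model for this role; in the minimal sketch below, a finite-difference sensitivity probe stands in for that trained model, and the names `soft_assertion` and `guided_mutation` are hypothetical illustrations, not code from the replication package.

```python
import numpy as np

def soft_assertion(f, x, eps=1e-6):
    """Toy stand-in for a trained soft assertion: probe the local
    sensitivity of f around x and return the mutation direction
    (+1 or -1) that increases instability. The paper learns this
    mapping from unit-test data; here we approximate it with a
    finite-difference check (illustrative assumption only)."""
    up = abs(f(x + eps) - f(x))
    down = abs(f(x - eps) - f(x))
    return 1.0 if up > down else -1.0

def guided_mutation(f, x0, steps=200, rate=0.5):
    """Mutate the input in the direction the soft assertion suggests
    until the output becomes non-finite or the budget runs out."""
    x = x0
    with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
        for _ in range(steps):
            y = f(x)
            if not np.isfinite(y):
                return x, y  # instability triggered
            x = x + soft_assertion(f, x) * rate * abs(x)
        return x, f(x)

# np.exp overflows to INF once its argument exceeds ~709; the probe
# steers the mutation upward until the overflow is reached.
x_bad, y_bad = guided_mutation(np.exp, 1.0)
print(x_bad, y_bad)  # a large input and an INF output
```

Note that this sketch only chases explicit NaN/INF signals; the paper's contribution is precisely that trained soft assertions can also flag silent instabilities that never crash, which a hand-written probe like this cannot express.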