🤖 AI Summary
Existing deep neural network (DNN) test suite quality assessment relies heavily on expensive mutation testing. To address this, we propose Latent Space Class Dispersion (LSCD), a novel test adequacy metric that quantifies the intra-class dispersion of samples in the intermediate-layer feature space of a DNN. This is the first work to establish latent-space intra-class dispersion as a core principle for test quality evaluation. Experiments across three major image classification benchmarks demonstrate that LSCD correlates strongly with mutation score (Spearman’s ρ = 0.87), significantly outperforming existing distance-based coverage metrics. This correlation is statistically significant across 129 mutants generated via pre-training mutation operators. Crucially, LSCD requires no model fine-tuning or auxiliary training, enabling low-cost, model-agnostic, and fully automated test quality assessment.
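The paper does not give its exact formula here, but the core idea of class-wise latent dispersion can be sketched as follows: extract intermediate-layer features for each test sample, group them by class label, and score each class by the average distance of its features to the class centroid. The function name and the averaging scheme below are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def latent_class_dispersion(features, labels):
    """Hypothetical sketch of class-wise latent dispersion:
    per class, the mean Euclidean distance of latent feature
    vectors to the class centroid, averaged over all classes.
    (The paper's actual LSCD formula may differ.)"""
    scores = []
    for c in np.unique(labels):
        feats = features[labels == c]          # latent vectors of class c
        centroid = feats.mean(axis=0)          # class centroid in latent space
        scores.append(np.linalg.norm(feats - centroid, axis=1).mean())
    return float(np.mean(scores))

# Toy example with 2-D "latent" features for two classes:
# a tightly clustered class (low dispersion) and a spread-out one.
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(50, 2))
spread = rng.normal(5.0, 2.0, size=(50, 2))
features = np.vstack([tight, spread])
labels = np.array([0] * 50 + [1] * 50)

print(latent_class_dispersion(features, labels))
```

In practice the `features` array would come from a forward pass through a chosen intermediate DNN layer; a test set whose samples cover each class's latent region more broadly would yield a higher score.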
📝 Abstract
High-quality test datasets are crucial for assessing the reliability of Deep Neural Networks (DNNs). Mutation testing evaluates test dataset quality based on its ability to uncover injected faults in DNNs, as measured by mutation score (MS). At the same time, its high computational cost motivates researchers to seek alternative test adequacy criteria. We propose Latent Space Class Dispersion (LSCD), a novel metric to quantify the quality of test datasets for DNNs. It measures the degree of dispersion within a test dataset as observed in the latent space of a DNN. Our empirical study shows that LSCD reveals and quantifies deficiencies in the test datasets of three popular image classification benchmarks. Corner cases generated using automated fuzzing were found to help enhance fault detection and improve the overall quality of the original test sets, as measured by both MS and LSCD. Our experiments revealed a high positive correlation (0.87) between LSCD and MS, substantially higher than that achieved by the well-studied Distance-based Surprise Coverage (0.25). These results were obtained from 129 mutants generated through pre-training mutation operators, with statistical significance and high corner-case validity. These observations suggest that LSCD can serve as a cost-effective alternative to expensive mutation testing, eliminating the need to generate mutant models while offering comparably valuable insights into test dataset quality for DNNs.