Latent Space Class Dispersion: Effective Test Data Quality Assessment for DNNs

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep neural network (DNN) test suite quality assessment relies heavily on expensive mutation testing. To address this, we propose Latent Space Class Dispersion (LSCD), a novel test adequacy metric that quantifies the intra-class dispersion of samples in the intermediate-layer feature space of DNNs. This is the first work to establish latent-space intra-class dispersion as a core principle for test quality evaluation. Experiments across three major image classification benchmarks demonstrate that LSCD correlates strongly with mutation score (Spearman's ρ = 0.87), significantly outperforming the well-studied Distance-based Surprise Coverage metric (ρ = 0.25). The correlation is highly statistically significant (p < 0.001) across 129 mutants generated via pre-training mutation operators. Crucially, LSCD requires no model fine-tuning or auxiliary training, enabling low-cost, model-agnostic, and fully automated test quality assessment.
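To make the idea concrete, below is a minimal sketch of intra-class dispersion in a latent space. It assumes intermediate-layer activations have already been extracted as a feature matrix, and measures how far each class's samples spread around their class centroid, averaged over classes. The function name `lscd_sketch` and the centroid-distance formulation are illustrative assumptions, not the paper's exact definition of LSCD.

```python
import numpy as np

def lscd_sketch(latent_feats, labels):
    """Illustrative intra-class dispersion in a DNN's latent space.

    latent_feats: (N, D) array of intermediate-layer activations
    labels:       (N,)   array of class labels

    For each class, compute the mean Euclidean distance of its samples
    to the class centroid, then average over classes. This is only a
    sketch of the idea behind LSCD, not the paper's formulation.
    """
    classes = np.unique(labels)
    per_class = []
    for c in classes:
        feats = latent_feats[labels == c]
        centroid = feats.mean(axis=0)
        # mean distance of this class's samples to their centroid
        per_class.append(np.linalg.norm(feats - centroid, axis=1).mean())
    return float(np.mean(per_class))

# toy check: a tightly clustered class vs. a widely spread one
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(50, 8))
wide = rng.normal(5.0, 2.0, size=(50, 8))
X = np.vstack([tight, wide])
y = np.array([0] * 50 + [1] * 50)
print(lscd_sketch(X, y))  # larger values indicate more dispersed test data
```

Under this reading, a test set whose samples cover each class's latent region broadly scores higher than one that probes only a narrow cluster per class.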

📝 Abstract
High-quality test datasets are crucial for assessing the reliability of Deep Neural Networks (DNNs). Mutation testing evaluates test dataset quality by its ability to uncover injected faults in DNNs, as measured by the mutation score (MS); however, its high computational cost motivates researchers to seek alternative test adequacy criteria. We propose Latent Space Class Dispersion (LSCD), a novel metric to quantify the quality of test datasets for DNNs. It measures the degree of dispersion within a test dataset as observed in the latent space of a DNN. Our empirical study shows that LSCD reveals and quantifies deficiencies in the test datasets of three popular image classification benchmarks. Corner cases generated through automated fuzzing were found to enhance fault detection and improve the overall quality of the original test sets, as measured by both MS and LSCD. Our experiments revealed a high positive correlation (0.87) between LSCD and MS, significantly higher than that achieved by the well-studied Distance-based Surprise Coverage (0.25). These results were obtained from 129 mutants generated through pre-training mutation operators, are statistically significant, and the generated corner cases showed high validity. These observations suggest that LSCD can serve as a cost-effective alternative to expensive mutation testing, eliminating the need to generate mutant models while offering comparably valuable insights into test dataset quality for DNNs.
Problem

Research questions and friction points this paper is trying to address.

Assessing test dataset quality for DNNs efficiently
Reducing computational cost of mutation testing alternatives
Measuring latent space dispersion to quantify dataset deficiencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

LSCD measures test dataset dispersion in latent space
LSCD correlates highly with mutation score (0.87)
LSCD is a cost-effective alternative to mutation testing
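The reported agreement between LSCD and mutation score is a Spearman rank correlation, i.e. the Pearson correlation of the ranks of the two score lists. A self-contained sketch (ignoring tied ranks, which the rank-by-double-argsort trick does not handle) with hypothetical per-mutant scores:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Uses the argsort-of-argsort trick to rank each list; this sketch
    does not average tied ranks, unlike a full implementation.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical LSCD values and mutation scores for five test suites
lscd_scores = [0.31, 0.44, 0.52, 0.61, 0.58]
mutation_scores = [0.40, 0.55, 0.60, 0.72, 0.70]
print(spearman_rho(lscd_scores, mutation_scores))  # → 1.0 (same ordering)
```

A high ρ means LSCD orders test suites almost the same way mutation testing would, which is the basis for using it as a cheap proxy for MS.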