AI Summary
To address the low efficiency of testing fine-tuned deep neural networks (DNNs) under constrained labeling budgets in distribution shift scenarios, this paper proposes MetaSel, a novel unsupervised test sample selection method that leverages behavioral discrepancies between fine-tuned and pre-trained models. MetaSel estimates a meta-level misclassification probability from the divergence between the two models' outputs, builds on a cross-distribution consistency assumption, and introduces a lightweight priority scoring mechanism to pinpoint the sensitive input subspaces where fine-tuning shifts decision boundaries. Extensive experiments across 68 fine-tuned models and three distribution shift settings demonstrate that MetaSel achieves average Test Relative Coverage improvements of 28.46% to 56.18% over ten state-of-the-art baselines. It further exhibits high stability and low labeling cost, establishing a new trade-off frontier between coverage efficacy and annotation efficiency.
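The core idea lends itself to a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it scores each unlabeled input by the symmetric KL divergence between the pre-trained and fine-tuned models' softmax outputs, using that divergence as a stand-in for MetaSel's meta-level misclassification estimate. The function name and the choice of KL divergence are assumptions.

```python
# Hypothetical sketch of dual-model divergence scoring (not the paper's code):
# inputs on which the two models' output distributions disagree most are
# treated as most likely to fall in the subspace where fine-tuning shifted
# the decision boundary, and hence most worth labeling.
import numpy as np

def divergence_priority_scores(p_pretrained: np.ndarray,
                               p_finetuned: np.ndarray,
                               eps: float = 1e-12) -> np.ndarray:
    """Rank unlabeled inputs by dual-model output divergence.

    p_pretrained, p_finetuned: (n_inputs, n_classes) softmax outputs of the
    pre-trained and fine-tuned models on the same unlabeled inputs.
    Returns one priority score per input; higher means more divergence.
    """
    p = np.clip(p_pretrained, eps, 1.0)
    q = np.clip(p_finetuned, eps, 1.0)
    # Symmetric KL divergence between the two output distributions. The paper
    # describes a learned meta-level misclassification estimator; KL is just
    # one plausible proxy for "behavioral discrepancy".
    kl_pq = np.sum(p * np.log(p / q), axis=1)
    kl_qp = np.sum(q * np.log(q / p), axis=1)
    return kl_pq + kl_qp
```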
Abstract
Deep Neural Networks (DNNs) face challenges during deployment due to data distribution shifts. Fine-tuning adapts pre-trained models to new contexts while requiring only small labeled datasets. However, testing fine-tuned models under constrained labeling budgets remains a critical challenge. This paper introduces MetaSel, a new approach tailored to fine-tuned DNN models that selects tests from unlabeled inputs. MetaSel assumes that fine-tuned and pre-trained models share related data distributions and exhibit similar behaviors for many inputs. However, their behaviors diverge within the input subspace where fine-tuning alters decision boundaries, making inputs in that subspace more prone to misclassification. Unlike general approaches that rely solely on the DNN model and its input set, MetaSel leverages information from both the fine-tuned and pre-trained models and their behavioral differences to estimate the misclassification probability of unlabeled test inputs, enabling more effective test selection. Our extensive empirical evaluation, comparing MetaSel against 10 state-of-the-art approaches and involving 68 fine-tuned models across weak, medium, and strong distribution shifts, demonstrates that MetaSel consistently delivers significant improvements in Test Relative Coverage (TRC) over existing baselines, particularly under highly constrained labeling budgets. MetaSel shows average TRC improvements of 28.46% to 56.18% over the most frequent second-best baselines while maintaining a high TRC median and low variability. Our results confirm MetaSel's practicality, robustness, and cost-effectiveness for test selection in the context of fine-tuned models.
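As a usage note, test selection under a labeling budget then reduces to taking the top-scoring inputs for annotation. A minimal sketch, assuming one precomputed priority score per unlabeled input; the helper name and placeholder scores are illustrative, not part of the paper:

```python
# Hypothetical budget-constrained selection: label only the `budget`
# highest-priority inputs, as ranked by a discrepancy score such as the
# divergence sketch above.
import numpy as np

def select_for_labeling(scores: np.ndarray, budget: int) -> np.ndarray:
    """Indices of the `budget` inputs deemed most likely to be misclassified."""
    return np.argsort(scores)[::-1][:budget]

rng = np.random.default_rng(0)
scores = rng.random(1000)      # placeholder scores for 1,000 unlabeled inputs
to_label = select_for_labeling(scores, budget=50)
```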