🤖 AI Summary
Accurately estimating a model's test error under scarce labeled data remains challenging. Method: This paper proposes a novel error-estimation paradigm that leverages high-quality synthetic data. Theoretically, we derive a new generalization error upper bound that incorporates generator-quality constraints, quantifying for the first time how the fidelity of the generative model affects estimation bias. Methodologically, we design an interpretable, optimization-friendly synthetic-sample construction strategy that jointly leverages generative modeling and generalization theory to improve assessment reliability. Results: Extensive experiments on both synthetic and real-world tabular datasets show that our approach consistently outperforms existing baselines, yielding significant and robust gains in both the accuracy and the stability of error estimation.
📝 Abstract
Accurately evaluating model performance is crucial for deploying machine learning systems in real-world applications. Traditional methods typically require a sufficiently large labeled test set to ensure a reliable evaluation. In many settings, however, a large labeled dataset is costly and labor-intensive to obtain, so evaluation must sometimes rest on only a few labeled samples, which is theoretically challenging. Recent advances in generative models offer a promising alternative by enabling the synthesis of high-quality data. In this work, we systematically investigate the use of synthetic data to estimate the test error of a trained model under limited labeled data. To this end, we develop novel generalization bounds that take synthetic data into account. These bounds suggest new ways to optimize synthetic samples for evaluation and theoretically reveal the significant role of the generator's quality. Inspired by these bounds, we propose a theoretically grounded method to generate optimized synthetic data for model evaluation. Experimental results on simulated and tabular datasets demonstrate that, compared to existing baselines, our method achieves accurate and more reliable estimates of the test error.
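To make the setting concrete, here is a minimal toy sketch of the general idea (not the paper's actual method or bound): a fixed classifier is evaluated with only a handful of labeled real points, a simple class-conditional Gaussian generator stands in for a high-quality generative model, and the two error estimates are convexly mixed. The mixing weight `lam`, the linear classifier, and the Gaussian generator are all hypothetical choices for illustration; in the paper's framework the synthetic samples and their weighting would instead be optimized under the derived bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two Gaussian classes in 2-D, class means (0,0) and (2,2).
def sample_real(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2))
    return x, y

# A fixed "trained" classifier (hypothetical stand-in for any model).
def predict(x):
    return (x.sum(axis=1) > 2.0).astype(int)

# Reference: ground-truth test error from a large held-out real set.
x_big, y_big = sample_real(100_000)
true_err = np.mean(predict(x_big) != y_big)

# Scarce labeled data: only 20 real test points.
x_few, y_few = sample_real(20)
err_few = np.mean(predict(x_few) != y_few)

# Imperfect generator: class-conditional Gaussian with moments
# estimated from the 20 real points.
def sample_synth(n):
    y = rng.integers(0, 2, n)
    x = np.empty((n, 2))
    for c in (0, 1):
        mask = y == c
        mu = x_few[y_few == c].mean(axis=0)
        sd = x_few[y_few == c].std(axis=0) + 1e-6
        x[mask] = rng.normal(mu, sd, size=(mask.sum(), 2))
    return x, y

x_syn, y_syn = sample_synth(10_000)
err_syn = np.mean(predict(x_syn) != y_syn)

# Combined estimator: convex mix of the scarce-real and synthetic estimates.
# lam is a fixed illustrative weight; a principled choice would depend on
# generator quality, as the paper's bounds formalize.
lam = 0.5
err_hat = lam * err_few + (1 - lam) * err_syn
print(f"true={true_err:.3f} few={err_few:.3f} syn={err_syn:.3f} mix={err_hat:.3f}")
```

The intuition this toy captures: `err_few` is unbiased but high-variance, `err_syn` is low-variance but biased by the generator's imperfection, and combining them can beat either alone when the generator is faithful enough.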