🤖 AI Summary
To address the high computational cost of repeated model retraining in data valuation, this paper proposes a retraining-free, efficient Shapley value estimation method. The core idea is to employ Gaussian Process Regression (GPR) to directly predict the utility (e.g., validation accuracy) of arbitrary data subsets, thereby bypassing exhaustive model training over exponentially many subsets. A novel GPR kernel is introduced, based on the sliced Wasserstein distance, which simultaneously ensures positive semi-definiteness and captures semantic similarity between data distributions, enabling prior-informed utility prediction. Extensive experiments across multiple models, datasets, and utility functions demonstrate that the method achieves low prediction error, substantially speeds up data valuation, and maintains high fidelity in Shapley value estimation compared to conventional retraining-based approaches.
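To make the kernel idea concrete, here is a minimal sketch of a sliced Wasserstein distance between two data subsets and the induced exponential kernel. This is an illustrative reconstruction, not the paper's implementation: the function names, the Monte Carlo projection count, and the `gamma` bandwidth are all assumptions, and the quantile-grid interpolation is one common way to handle unequal sample sizes.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance between
    two empirical distributions X (n, d) and Y (m, d): project both
    samples onto random 1-D directions and average the 1-D W2 distances."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    qs = np.linspace(0.0, 1.0, 100)  # common quantile grid
    total = 0.0
    for _ in range(n_projections):
        # Random unit direction on the sphere.
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # In 1-D, the W2 distance is the L2 distance between quantile
        # functions; we compare both projections on a shared grid.
        fx = np.quantile(X @ theta, qs)
        fy = np.quantile(Y @ theta, qs)
        total += np.mean((fx - fy) ** 2)
    return np.sqrt(total / n_projections)

def sw_kernel(X, Y, gamma=1.0):
    """Exponential kernel on the squared sliced Wasserstein distance,
    k(A, B) = exp(-gamma * SW(A, B)^2)."""
    return np.exp(-gamma * sliced_wasserstein(X, Y) ** 2)
```

Identical subsets yield distance 0 and kernel value 1, while well-separated subsets yield a kernel value near 0, which is the similarity structure the GPR prior exploits.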
📝 Abstract
Data valuation is increasingly used in machine learning (ML) to decide the fair compensation for data owners and identify valuable or harmful data for improving ML models. Cooperative game theory-based data valuation, such as Data Shapley, requires evaluating the data utility (e.g., validation accuracy) and retraining the ML model for multiple data subsets. While most existing works on efficient estimation of the Shapley values have focused on reducing the number of subsets to evaluate, our framework, `DUPRE`, takes an alternative yet complementary approach that reduces the cost per subset evaluation by predicting data utilities instead of evaluating them by model retraining. Specifically, given the evaluated data utilities of some data subsets, `DUPRE` fits a *Gaussian process* (GP) regression model to predict the utility of every other data subset. Our key contribution lies in the design of our GP kernel based on the sliced Wasserstein distance between empirical data distributions. In particular, we show that the kernel is valid and positive semi-definite, encodes prior knowledge of similarities between different data subsets, and can be efficiently computed. We empirically verify that `DUPRE` introduces low prediction error and speeds up data valuation for various ML models, datasets, and utility functions.
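The abstract's central mechanism, fitting a GP on the utilities of a few evaluated subsets and predicting the rest, can be sketched as follows. This is a hedged illustration of the idea rather than the authors' code: `sw2`, `gp_predict`, the projection count, and the `gamma`/`noise` hyperparameters are all assumed names and values, and the prediction is the standard GP posterior mean under an exponential sliced-Wasserstein kernel.

```python
import numpy as np

def sw2(X, Y, n_proj=30, seed=0):
    """Rough sliced Wasserstein-2 distance between samples X (n, d), Y (m, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    qs = np.linspace(0.0, 1.0, 50)
    acc = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        acc += np.mean((np.quantile(X @ theta, qs) - np.quantile(Y @ theta, qs)) ** 2)
    return np.sqrt(acc / n_proj)

def gp_predict(train_subsets, train_utils, test_subsets, gamma=0.5, noise=1e-4):
    """GP posterior mean for subset utilities under the kernel
    k(A, B) = exp(-gamma * SW(A, B)^2): evaluate a few subsets by
    actual retraining (train_utils), then predict the rest."""
    k = lambda A, B: np.exp(-gamma * sw2(A, B) ** 2)
    # Gram matrix over evaluated subsets and cross-kernel to new subsets.
    K = np.array([[k(a, b) for b in train_subsets] for a in train_subsets])
    Ks = np.array([[k(t, a) for a in train_subsets] for t in test_subsets])
    # Posterior mean: Ks @ (K + noise*I)^{-1} @ y.
    alpha = np.linalg.solve(K + noise * np.eye(len(train_subsets)),
                            np.asarray(train_utils, dtype=float))
    return Ks @ alpha
```

A subset whose distribution matches an already-evaluated subset receives a prediction close to that subset's measured utility, which is how the kernel's encoded prior knowledge replaces retraining.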