AI Summary
Existing data valuation methods suffer from high computational overhead, weak fairness guarantees, and poor interpretability, limiting their applicability in large-scale, high-stakes settings. This paper proposes a neural dynamical data valuation framework grounded in optimal control theory, formulating data valuation as a continuous-time optimal control problem for the first time. Value evolution is parameterized via neural differential equations, eliminating repeated retraining of utility functions and enabling efficient, accurate valuation of all samples in a single training pass. The framework introduces a sensitivity-driven mean-field interaction mechanism for fairness-aware re-weighting, overcoming the scalability bottlenecks inherent in marginal-contribution-based approaches. Extensive experiments across multiple tasks and datasets demonstrate substantial improvements over state-of-the-art methods: 12-28% higher accuracy in distinguishing high- versus low-value samples and over 90% reduction in computational cost.
Abstract
Data constitute the foundational component of the data economy and its marketplaces, and efficient, fair data valuation has emerged as a topic of significant interest. Many approaches based on marginal contribution have shown promising results in various downstream tasks. However, they are well known to be computationally expensive, as they require training a large number of utility functions, which evaluate the usefulness or value of a given dataset for a specific purpose. As a result, applying these methods to a data marketplace involving large-scale datasets has been recognized as infeasible. A critical issue therefore arises: how can retraining of the utility function be avoided? To address this issue, we propose a novel data valuation method from the perspective of optimal control, named neural dynamic data valuation (NDDV). Our method has a solid theoretical interpretation, identifying data values via the sensitivity of the data's optimal control state. In addition, we implement a data re-weighting strategy that captures the unique features of individual data points and ensures fairness through the interaction between data points and the mean-field states. Notably, our method requires training only once to estimate the values of all data points, significantly improving computational efficiency. We conduct comprehensive experiments across different datasets and tasks. The results demonstrate that NDDV outperforms existing state-of-the-art data valuation methods in accurately identifying data points with either high or low values, while being more computationally efficient.
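The core idea, a value state per sample that evolves under a learned drift with a mean-field coupling, can be illustrated with a toy sketch. Everything below is an illustrative assumption (the linear drift, the coupling weight, the loss-based initialization), not the paper's actual implementation; the real method learns the drift with a neural network and derives values from control-state sensitivities.

```python
import numpy as np

def euler_integrate(z0, theta, mean_field_weight=0.1, steps=50, dt=0.02):
    """Integrate dz/dt = theta * z + w * (mean(z) - z) with explicit Euler steps.

    theta stands in for a learned drift (a neural network in the paper);
    the (mean(z) - z) term is a toy mean-field interaction that couples
    each sample's value state to the population average.
    """
    z = z0.copy()
    for _ in range(steps):
        mean_state = z.mean()                 # mean-field interaction term
        dz = theta * z + mean_field_weight * (mean_state - z)
        z = z + dt * dz                       # explicit Euler step
    return z

# One pass over synthetic per-sample losses: low loss -> higher initial
# value state; the final states serve as estimated data values.
rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 1.0, size=8)
z0 = -losses
values = euler_integrate(z0, theta=0.5)
ranking = np.argsort(-values)                 # high-value samples first
```

Because the toy dynamics are linear with a positive net coefficient, the final ranking simply amplifies the initial ordering; the point of the sketch is only the single-pass structure: one forward integration yields value estimates for every sample, with no per-subset utility retraining.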