AI Summary
Traditional imputation methods estimate only the conditional mean of missing values, failing to characterize predictive uncertainty. To address this, we propose kNNSampler, a k-nearest-neighbor-based stochastic multiple imputation method that consistently estimates the full conditional distribution of missing responses given observed covariates. Its core innovation lies in non-deterministic sampling from the observed responses of the k most similar units, enabling unbiased recovery of the missing-value distribution and principled uncertainty quantification. We establish theoretical guarantees: under mild regularity conditions, kNNSampler achieves asymptotic consistency in estimating the conditional distribution. Empirical evaluations across diverse missingness mechanisms (MCAR, MAR, MNAR) and data types demonstrate substantial improvements over state-of-the-art mean-based and model-driven imputation approaches. An open-source implementation ensures full reproducibility.
Abstract
We study a missing-value imputation method, termed kNNSampler, that imputes a given unit's missing response by randomly sampling from the observed responses of the $k$ units most similar to that unit in terms of the observed covariates. This method can sample unknown missing values from their distributions, quantify the uncertainty of missing values, and be readily used for multiple imputation. Unlike the popular kNNImputer, which estimates the conditional mean of a missing response given an observed covariate, kNNSampler is theoretically shown to estimate the conditional distribution of a missing response given an observed covariate. Experiments demonstrate its effectiveness in recovering the distribution of missing values. The code for kNNSampler is made publicly available (https://github.com/SAP/knn-sampler).
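The sampling scheme described above can be sketched in a few lines of Python. This is a minimal illustration only, not the official SAP/knn-sampler implementation; the function name, the Euclidean distance metric, and uniform sampling with replacement from the neighbors' responses are assumptions made for the sketch:

```python
import numpy as np

def knn_sampler(X_obs, y_obs, X_miss, k=5, n_draws=1, rng=None):
    """Sketch of the kNNSampler idea: for each unit with a missing
    response, find the k observed units with the closest covariates and
    draw imputations uniformly from their observed responses.

    X_obs:  (n, d) covariates of units with observed responses
    y_obs:  (n,)   observed responses
    X_miss: (m, d) covariates of units with missing responses
    Returns an (m, n_draws) array of sampled imputations.
    """
    rng = np.random.default_rng(rng)
    X_obs = np.asarray(X_obs, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    X_miss = np.asarray(X_miss, dtype=float)
    draws = np.empty((len(X_miss), n_draws))
    for i, x in enumerate(X_miss):
        # Euclidean distances from the query unit to all observed units
        dists = np.linalg.norm(X_obs - x, axis=1)
        neighbors = np.argsort(dists)[:k]
        # Stochastic step: sample responses (with replacement) from the
        # k nearest neighbors, rather than averaging them as kNNImputer does
        draws[i] = rng.choice(y_obs[neighbors], size=n_draws, replace=True)
    return draws
```

Drawing `n_draws > 1` samples per missing unit yields the multiple imputations mentioned in the abstract, and their spread gives a direct uncertainty estimate for each missing value.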