🤖 AI Summary
This paper investigates the impact of missing data on Shapley-value-based interpretability of machine learning models. We conduct theoretical analysis and extensive empirical evaluation across multiple imputation strategies—including mean imputation, KNN imputation, MICE, and XGBoost’s native missing-value handling—to characterize how missingness systematically biases Shapley value estimation. Contrary to common assumptions, we find that lower prediction error (test MSE) does not imply lower explanation error (Shapley MSE); notably, while XGBoost’s native handling of missing values preserves predictive accuracy, it severely degrades the fidelity of feature importance and interaction attributions compared with imputing before training. Moreover, the choice of imputation method significantly affects the stability and consistency of Shapley values. Based on these findings, we propose principled guidelines for selecting imputation methods tailored to interpretability objectives. Our key contribution is a theoretical and empirical characterization of how missing data mechanisms bias Shapley values—challenging the implicit “accurate prediction implies faithful explanation” assumption—and providing methodological foundations for trustworthy AI.
📝 Abstract
Missing data is a prevalent issue that can significantly impair both model performance and interpretability. This paper briefly surveys the treatment of missing data in Explainable Artificial Intelligence and experimentally investigates how various imputation methods affect the calculation of Shapley values, a popular technique for interpreting complex machine learning models. We compare different imputation strategies and assess their impact on feature importance and feature interactions as measured by Shapley values, and we also theoretically analyze the effects of missing values on Shapley values. Importantly, our findings reveal that the choice of imputation method can introduce biases that alter the Shapley values, thereby affecting the interpretability of the model. Moreover, a lower test prediction mean squared error (MSE) does not necessarily imply a lower MSE in Shapley values, and vice versa. Also, although XGBoost can handle missing data natively, applying it directly to data with missing values can seriously degrade interpretability compared with imputing the data before training XGBoost. This study provides a comprehensive evaluation of imputation methods in the context of model interpretation, offering practical guidance for selecting appropriate techniques based on dataset characteristics and analysis objectives. The results underscore the importance of accounting for imputation effects to ensure robust and reliable insights from machine learning models.
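The abstract's central claim—that the imputation choice alone can change a model's Shapley values—can be illustrated with a minimal, self-contained sketch. The model, feature values, and "KNN-style" imputed value below are all hypothetical toy choices, not from the paper; the Shapley values are computed exactly by enumerating coalitions against a mean baseline:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at point x, with absent features
    replaced by their baseline (mean) values."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # classic Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += w * (f(z_with) - f(z_without))
        phis.append(phi)
    return phis

# Hypothetical linear model: f(x) = 2*x1 + 3*x2, features centered at 0
f = lambda z: 2 * z[0] + 3 * z[1]
baseline = [0.0, 0.0]

# Suppose x2 is missing for this instance; compare two imputations:
x_mean_imputed = [1.0, 0.0]  # mean imputation -> phi_2 = 0 (feature looks irrelevant)
x_knn_imputed = [1.0, 0.5]   # KNN-style estimate -> phi_2 = 1.5 (feature contributes)

print(shapley_values(f, x_mean_imputed, baseline))  # [2.0, 0.0]
print(shapley_values(f, x_knn_imputed, baseline))   # [2.0, 1.5]
```

The model's prediction pipeline is untouched in both cases; only the imputed value differs, yet the attribution for the missing feature flips from "no contribution" to a substantial one—exactly the kind of interpretability shift the paper studies.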