Explainability of Machine Learning Models under Missing Data

📅 2024-06-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the impact of missing data on Shapley-value-based interpretability of machine learning models. We conduct theoretical analysis and extensive empirical evaluation across multiple imputation strategies—including mean imputation, KNN imputation, MICE, and XGBoost’s native missing-value handling—to characterize how missingness systematically biases Shapley value estimation. Contrary to common assumptions, we find no positive correlation between prediction error (MSE) and explanation error (Shapley MSE); notably, while XGBoost’s native handling improves predictive accuracy, it severely degrades fidelity in feature importance and interaction attribution. Moreover, imputation choice significantly affects Shapley value stability and consistency. Based on these findings, we propose a principled guideline for selecting imputation methods tailored to interpretability objectives. Our key contribution is establishing the first formal theoretical link between missing data mechanisms and Shapley value bias—challenging the implicit “accurate prediction implies faithful explanation” assumption—and providing methodological foundations for trustworthy AI.

Technology Category

Application Category

📝 Abstract
Missing data is a prevalent issue that can significantly impair model performance and interpretability. This paper briefly summarizes the development of the field of missing data with respect to Explainable Artificial Intelligence and experimentally investigates the effects of various imputation methods on the calculation of Shapley values, a popular technique for interpreting complex machine learning models. We compare different imputation strategies and assess their impact on feature importance and interaction as determined by Shapley values. Moreover, we also theoretically analyze the effects of missing values on Shapley values. Importantly, our findings reveal that the choice of imputation method can introduce biases that could lead to changes in the Shapley values, thereby affecting the interpretability of the model. Moreover, and that a lower test prediction mean square error (MSE) may not imply a lower MSE in Shapley values and vice versa. Also, while Xgboost is a method that could handle missing data directly, using Xgboost directly on missing data can seriously affect interpretability compared to imputing the data before training Xgboost. This study provides a comprehensive evaluation of imputation methods in the context of model interpretation, offering practical guidance for selecting appropriate techniques based on dataset characteristics and analysis objectives. The results underscore the importance of considering imputation effects to ensure robust and reliable insights from machine learning models.
Problem

Research questions and friction points this paper is trying to address.

Missing Data
Machine Learning Interpretability
Shapley Values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Missing Data Imputation
Shapley Value Bias
Xgboost with Missing Values
🔎 Similar Papers
No similar papers found.
T
Tuan L. Vo
LTCI, Télém Paris, Institut Polytechnique de Paris, Paris, France
T
Thu Nguyen
SimulaMet, Oslo, Norway
H
Hugo Hammer
Oslo Metropolitan University, Oslo, Norway; SimulaMet, Oslo, Norway
M
M. Riegler
Oslo Metropolitan University, Oslo, Norway; SimulaMet, Oslo, Norway
Pål Halvorsen
Pål Halvorsen
SimulaMet, Simula Research Laboratory, Oslo Metropolitan University (OsloMet), University of Oslo
Multimedia systemsMedical Multimedia SystemsSport SystemsApplied Machine Learning