Evaluating Counterfactual Explanation Methods on Incomplete Inputs

📅 2026-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of generating counterfactual explanations from real-world inputs that often contain missing values—a scenario underexplored in existing literature. The work presents the first systematic investigation into how input incompleteness affects counterfactual generation, evaluating multiple state-of-the-art algorithms under simulated missingness and comparing the performance of robust versus non-robust approaches. Results reveal that while robust methods exhibit marginally better validity, all current techniques struggle to consistently produce counterfactuals that are both valid and plausible. These findings underscore a critical limitation in the current state of counterfactual explanation methods and highlight the urgent need for novel approaches specifically designed to handle incomplete input data.
📝 Abstract
Existing algorithms for generating Counterfactual Explanations (CXs) for Machine Learning (ML) typically assume fully specified inputs. However, real-world data often contains missing values, and the impact of these incomplete inputs on the performance of existing CX methods remains unexplored. To address this gap, we systematically evaluate recent CX generation methods on their ability to provide valid and plausible counterfactuals when inputs are incomplete. As part of this investigation, we hypothesize that robust CX generation methods will be better suited to address the challenge of providing valid and plausible counterfactuals when inputs are incomplete. Our findings reveal that while robust CX methods achieve higher validity than non-robust ones, all methods struggle to find valid counterfactuals. These results motivate the need for new CX methods capable of handling incomplete inputs.
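The abstract's core concern — that counterfactuals computed on incomplete inputs may not remain valid — can be illustrated with a minimal, self-contained sketch. Everything here (the toy linear classifier, MCAR-style missingness, mean imputation, the naive search) is an illustrative assumption, not the paper's actual evaluation pipeline:

```python
# Hypothetical sketch: why input missingness can undermine counterfactual
# validity. The model, imputation strategy, and search are toy stand-ins.

def model(x):
    # Toy linear classifier: class 1 iff x[0] + x[1] > 1.0
    return 1 if x[0] + x[1] > 1.0 else 0

def impute(x, means):
    # Fill missing entries (None) with per-feature means.
    return [m if v is None else v for v, m in zip(x, means)]

def counterfactual(x, target, step=0.05, max_iter=200):
    # Naive gradient-free search: nudge all features until the class flips.
    cx = list(x)
    for _ in range(max_iter):
        if model(cx) == target:
            return cx
        cx = [v + step for v in cx]
    return cx

means = [0.5, 0.5]
x_incomplete = [0.2, None]          # second feature unobserved
x_imputed = impute(x_incomplete, means)
cx = counterfactual(x_imputed, target=1)

print("imputed input:", x_imputed, "->", model(x_imputed))
print("counterfactual:", cx, "->", model(cx))

# Validity check against one possible true completion: if the hidden
# feature was really 0.0 rather than the imputed mean 0.5, applying the
# same feature changes fails to flip the prediction.
delta = [c - v for c, v in zip(cx, x_imputed)]
x_true = [0.2, 0.0]
cx_on_true = [v + d for v, d in zip(x_true, delta)]
print("on true completion:", cx_on_true, "->", model(cx_on_true))
```

The counterfactual is valid for the imputed input but not for the alternative completion of the missing feature — a toy version of the validity gap the paper measures across state-of-the-art CX methods.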
Problem

Research questions and friction points this paper is trying to address.

Counterfactual Explanations
Incomplete Inputs
Missing Values
Machine Learning
Explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual Explanations
Incomplete Inputs
Missing Values
Robustness
Explainable AI
Francesco Leofante
Imperial College London
Artificial Intelligence
Daniel Neider
TU Dortmund University and Center for Trustworthy Data Science and Security
Formal Methods · Machine Learning · Logic · Artificial Intelligence
Mustafa Yalçıner
TU Dortmund University, Dortmund, Germany; Center for Trustworthy Data Science and Security, University Alliance Ruhr, Dortmund, Germany