🤖 AI Summary
Regulated domains such as finance urgently require explanation methods that are both interpretable and causally valid; popular model-agnostic approaches such as Shapley values rarely provide the latter. To address this, we propose a model-aware counterfactual explanation framework specifically designed for random forests. Leveraging the intrinsic tree structure and node-splitting mechanism, our method efficiently searches the model's representation space for minimal-intervention counterfactual instances while simultaneously quantifying each feature's causal contribution. By integrating counterfactual reasoning, similarity-aware learning, and path-based analysis, it generates sparse, semantically aligned, and actionable explanations. Experiments on MNIST and the German Credit dataset demonstrate that, compared to model-agnostic approaches such as Shapley values, our method yields more concise, operationally meaningful explanations, enhancing user comprehension and trust in model decisions without sacrificing fidelity.
📝 Abstract
Despite their enormous predictive power, machine learning models are often unsuitable for applications in regulated industries such as finance, due to their limited capacity to provide explanations. While model-agnostic frameworks such as Shapley values have proven convenient and popular, they rarely align with the kinds of causal explanations that are typically sought. Counterfactual case-based explanations, in which an individual is informed of which circumstances would need to be different to cause a change in outcome, may be more intuitive and actionable. However, finding appropriate counterfactual cases is an open challenge, as is interpreting which features are most critical for the change in outcome. Here, we pose the question of counterfactual search and interpretation in terms of similarity learning, exploiting the representation learned by the random forest predictive model itself. Once a counterfactual is found, the feature importance of the explanation is computed as a function of which random forest partitions are crossed in order to reach it from the original instance. We demonstrate this method on both the MNIST handwritten digit dataset and the German credit dataset, finding that it generates explanations that are sparser and more useful than Shapley values.
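The two-stage idea in the abstract can be sketched in code: use the forest's own representation (leaf co-occurrence, i.e. random forest proximity) to find the most similar instance with a different prediction, then attribute the flipped outcome to the features whose split thresholds are crossed between the original and the counterfactual. The dataset, variable names, and the exact attribution rule below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: RF-proximity counterfactual search plus a
# partition-crossing feature attribution. Synthetic data stands in
# for MNIST / German Credit; the attribution rule is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]                        # instance to explain
pred = rf.predict([x])[0]

# RF proximity: fraction of trees in which two points share a leaf.
leaves = rf.apply(X)            # shape (n_samples, n_trees)
x_leaves = rf.apply([x])[0]
proximity = (leaves == x_leaves).mean(axis=1)

# Counterfactual: the most proximate point predicted as another class.
other = rf.predict(X) != pred
cf_idx = int(np.argmax(np.where(other, proximity, -1.0)))
x_cf = X[cf_idx]

# Feature importance: count, over all trees, the split thresholds that
# separate x from its counterfactual (the partitions "crossed").
importance = np.zeros(X.shape[1])
for tree in rf.estimators_:
    t = tree.tree_
    for node in range(t.node_count):
        f = t.feature[node]     # -2 marks leaf nodes
        if f >= 0 and (x[f] <= t.threshold[node]) != (x_cf[f] <= t.threshold[node]):
            importance[f] += 1
importance /= importance.sum()
print("counterfactual index:", cf_idx)
print("partition-crossing importances:", importance.round(3))
```

Because only the features whose thresholds are actually crossed receive mass, the resulting attribution is sparse by construction, which is the property the abstract contrasts with dense Shapley-value attributions.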