🤖 AI Summary
Existing counterfactual explanation methods rely on predefined mutable feature sets, limiting adaptability to users’ heterogeneous real-world constraints.
Method: This paper proposes a dynamic counterfactual explanation framework grounded in *counterfactual templates*, enabling users to specify mutable features interactively at inference time and generate actionable, personalized recommendations aligned with their constraints.
Contribution/Results: (1) We introduce the first counterfactual template mechanism that is training-free and adds zero optimization overhead at inference time; (2) We integrate a generative model (FCEGAN) with black-box classifiers, using historical prediction data to keep explanations aligned with user-specified constraints. Evaluated on economic and healthcare datasets, our method improves counterfactual validity by 32.7% on average, supports real-time interactive explanation, and requires no access to model internals or retraining.
📝 Abstract
Counterfactual explanations provide actionable insights toward a desired outcome by suggesting minimal changes to input features. However, existing methods rely on a fixed set of mutable features, which makes counterfactual explanations inflexible for users with heterogeneous real-world constraints. Here, we introduce Flexible Counterfactual Explanations, a framework built on counterfactual templates, which allow users to dynamically specify mutable features at inference time. Our implementation, the Flexible Counterfactual Explanation GAN (FCEGAN), uses Generative Adversarial Networks to align explanations with user-defined constraints without model retraining or additional optimization. Furthermore, FCEGAN is designed for black-box scenarios, leveraging historical prediction datasets to generate explanations without direct access to model internals. Experiments on economic and healthcare datasets demonstrate that FCEGAN significantly improves the validity of counterfactual explanations compared to traditional benchmark methods. By combining user-driven flexibility with black-box compatibility, counterfactual templates support personalized explanations tailored to user constraints.
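To make the template mechanism concrete, here is a minimal sketch of the core idea: a counterfactual template can be viewed as a binary mask over features that the user marks mutable at inference time, and any candidate counterfactual is projected back onto the original input for the immutable features. This is an illustrative reading of the abstract, not the paper's actual FCEGAN implementation; the function name `apply_template` and the example features are hypothetical.

```python
import numpy as np

def apply_template(x, proposal, mutable_mask):
    """Combine an original input with a generator's proposal under a
    counterfactual template: only features flagged mutable may change,
    while all other features are copied unchanged from the input.
    (Illustrative sketch, not the paper's implementation.)"""
    x = np.asarray(x, dtype=float)
    proposal = np.asarray(proposal, dtype=float)
    mask = np.asarray(mutable_mask, dtype=float)
    return mask * proposal + (1.0 - mask) * x

# Example: the user marks only features 1 and 3 as mutable.
x        = np.array([30.0, 1200.0, 2.0, 0.0])   # original input
proposal = np.array([25.0, 2500.0, 5.0, 1.0])   # raw generator output
mask     = np.array([0, 1, 0, 1])               # counterfactual template

cf = apply_template(x, proposal, mask)
# Immutable features (indices 0 and 2) keep their original values,
# so the counterfactual respects the user's constraints by construction.
```

Because the projection is a single masked blend, changing the template requires no retraining or re-optimization: the same generator output can be filtered through any mask the user supplies at inference time, which matches the "training-free, zero-optimization-overhead" claim in the summary above.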