🤖 AI Summary
This paper addresses contrastive explanation queries of the form “Why P rather than Q?” by proposing a unified logical framework, based on propositional logic, that explicitly models and compares the causal antecedents of P and Q. Methodologically, it introduces a minimal-cardinality semantics for contrastive explanations, provides a decidable logical characterization, and implements an ASP-based solver for CNF formulas. Theoretically, it establishes the computational complexity of several contrastive explanation variants, including Σ₂^P-completeness results, and delineates their formal boundaries. Practically, it develops a prototype system evaluated on diverse benchmark instances, demonstrating its practical utility. The work thus provides a formal foundation and a computational toolkit for counterfactual reasoning in explainable AI.
📝 Abstract
We define several canonical problems related to contrastive explanations, each answering a question of the form "Why P but not Q?". The problems compute causes for both P and Q, explicitly comparing their differences. We investigate the basic properties of our definitions in the setting of propositional logic. We show, inter alia, that our framework captures a cardinality-minimal version of existing contrastive explanations in the literature. Furthermore, we provide an extensive analysis of the computational complexities of the problems. We also implement the problems for CNF formulas using answer set programming and present several examples demonstrating how they work in practice.
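To make the cardinality-minimal idea concrete, here is a minimal brute-force sketch in Python. It is not the paper's framework or its ASP encoding: it illustrates only the common simplified notion of a contrastive explanation as a smallest set of variables whose values must change to flip an outcome. The function name and the toy formula are our own illustrative choices.

```python
from itertools import combinations

def min_contrastive_explanation(f, x):
    """Brute-force a cardinality-minimal contrastive explanation:
    the smallest set of variable indices whose values must be flipped
    so that f no longer holds on x (the outcome changes from P to not-P).
    f is a Boolean function over a list of truth values; x satisfies f."""
    n = len(x)
    assert f(x), "x is expected to satisfy f (the fact P)"
    # Try candidate sets in order of increasing size, so the first hit
    # is guaranteed to be cardinality-minimal.
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            y = list(x)
            for i in subset:
                y[i] = not y[i]
            if not f(y):
                return set(subset)
    return None  # no set of flips falsifies f

# Toy example: f(v) = v0 and (v1 or v2), with x = (True, True, True).
f = lambda v: v[0] and (v[1] or v[2])
x = [True, True, True]
print(min_contrastive_explanation(f, x))  # flipping v0 alone falsifies f
```

The search enumerates exponentially many candidate sets, which is consistent with the intractability the paper establishes for such problems; the ASP-based implementation mentioned in the abstract exists precisely to handle this search declaratively and more efficiently.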