🤖 AI Summary
This work addresses the high computational complexity and instability inherent in counterfactual reasoning within probabilistic logic programming frameworks such as ProbLog. To overcome these challenges, the authors propose a program transformation method grounded in a weak independence assumption, which reformulates counterfactual queries into Single-World Intervention Programs (SWIPs). By structurally decomposing original clauses into observed and fixed components, the approach reduces counterfactual inference to smaller-scale marginal inference tasks. This transformation preserves correctness while substantially lowering computational overhead, and it is applicable to a broad class of structural causal models. Experimental results demonstrate that, compared to existing methods, the proposed technique reduces inference time by an average of 35%, significantly enhancing both the efficiency and reliability of counterfactual reasoning.
📝 Abstract
Probabilistic Logic Programming (PLP) languages, like ProbLog, naturally support reasoning under uncertainty while maintaining a declarative and interpretable framework. Meanwhile, counterfactual reasoning (i.e., answering "what if" questions) is critical for ensuring AI systems are robust and trustworthy; however, integrating this capability into PLP can be computationally prohibitive and unstable in accuracy. This paper addresses this challenge by proposing an efficient program transformation for counterfactuals as Single World Intervention Programs (SWIPs) in ProbLog. By systematically splitting ProbLog clauses into observed and fixed components relevant to a counterfactual, we create a transformed program that (1) does not asymptotically exceed the computational complexity of existing methods, and is strictly smaller in common cases, and (2) reduces counterfactual reasoning to marginal inference over a simpler program. We formally prove the correctness of our approach, which relies on a weaker set of independence assumptions and is consistent with conditional independencies, showing that the resulting marginal probabilities match the counterfactual distributions of the underlying Structural Causal Model across a wide range of domains. Our method achieves a 35% reduction in inference time versus existing methods in extensive experiments. This work makes complex counterfactual reasoning more computationally tractable and reliable, providing a crucial step towards developing more robust and explainable AI systems. The code is at https://github.com/EVIEHub/swip.
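To make the semantics concrete, the following is a minimal sketch (not the paper's SWIP transformation) of the baseline twin-world counterfactual computation that such transformations aim to speed up: both the observed and the intervened world share the same exogenous noise, evidence conditions the observed world, and the query is evaluated in the intervened one. The toy sprinkler SCM and all variable names here are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Hypothetical toy SCM (illustrative, not from the paper): exogenous noise
# terms u_rain, u_sprinkler drive endogenous variables rain, sprinkler, wet.
PRIORS = {"u_rain": 0.4, "u_sprinkler": 0.3}

def endogenous(u, intervention=None):
    """Evaluate the structural equations; `intervention` overrides a
    variable's equation, i.e. applies the do-operator."""
    rain = u["u_rain"]
    sprinkler = u["u_sprinkler"]
    if intervention and "sprinkler" in intervention:
        sprinkler = intervention["sprinkler"]
    wet = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

def counterfactual(evidence, intervention, query):
    """P(query holds under the intervention | evidence held in the actual
    world), by brute-force enumeration of exogenous worlds. The observed
    and intervened worlds share the same noise assignment, which is the
    essence of twin-world counterfactual semantics."""
    num = den = 0.0
    for bits in product([0, 1], repeat=len(PRIORS)):
        u = dict(zip(PRIORS, bits))
        w = 1.0
        for name, bit in u.items():
            w *= PRIORS[name] if bit else 1 - PRIORS[name]
        actual = endogenous(u)                # observed world
        if all(actual[k] == v for k, v in evidence.items()):
            den += w
            cf = endogenous(u, intervention)  # intervened world, same noise
            if all(cf[k] == v for k, v in query.items()):
                num += w
    return num / den

# "Given that the grass was wet, would it still be wet had the sprinkler
# been off?"  Reduces to P(rain | wet) = 0.4 / 0.58 ≈ 0.6897.
p = counterfactual({"wet": 1}, {"sprinkler": 0}, {"wet": 1})
print(round(p, 4))  # → 0.6897
```

This enumeration is exponential in the number of exogenous variables; the abstract's point is that the SWIP transformation instead produces a single, smaller program on which standard marginal inference yields the same counterfactual probabilities.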