🤖 AI Summary
Existing methods for generating unsatisfiability explanations in constraint solving suffer from low efficiency, hindering practical deployment in explainable AI.
Method: This paper proposes a framework of abstract proofs grounded in verifiable solver proofs, enabling efficient translation of formal proofs into stepwise, human-understandable explanations. We design a lightweight abstraction mechanism to construct a logically tractable proof skeleton and introduce a serialization algorithm enhanced with semantic-aware trimming and equivalence-based simplification, substantially reducing explanation length and logical complexity.
Contribution/Results: Our approach achieves a 10×–100× speedup over state-of-the-art methods while preserving both explanation accuracy and intelligibility. Crucially, it systematically transforms verifiable proofs into compact, interpretable inference chains, establishing a novel paradigm for formal reasoning in explainable AI.
📝 Abstract
In the field of Explainable Constraint Solving, it is common to explain to a user why a problem is unsatisfiable. A recently proposed method for this is to compute a sequence of explanation steps. Such a step-wise explanation shows individual reasoning steps involving constraints from the original specification, which together explain a conflict. However, computing a step-wise explanation is computationally expensive, limiting the scope of problems for which it can be used. We investigate how proofs generated by a constraint solver can serve as a starting point for computing step-wise explanations, instead of computing the explanations step by step from scratch. More specifically, we define a framework of abstract proofs, in which both proofs and step-wise explanations can be represented. We then propose several methods for converting a proof into a step-wise explanation sequence, with special attention to trimming and simplification techniques that keep the sequence and its individual steps small. Our results show that our method significantly speeds up the generation of step-wise explanation sequences, while the resulting explanations have quality comparable to the current state of the art.
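To make the proof-to-explanation idea concrete, here is a minimal sketch of the trimming step described above: given a proof as a list of derivation steps ending in a conflict, walk backwards from the conflict and keep only the steps whose derived facts are actually used. This is an illustrative toy, not the paper's implementation; the `Step` representation and all fact names are hypothetical.

```python
# Illustrative sketch (hypothetical data model, not the paper's code):
# a proof is a list of steps, each deriving a fact from premises
# (input constraints like "c1", or facts derived by earlier steps).
# The special derived fact "false" marks the final conflict.
from dataclasses import dataclass


@dataclass(frozen=True)
class Step:
    derived: str            # the fact this step derives
    premises: tuple         # facts it uses (constraints or earlier derivations)


def trim(proof):
    """Keep only the steps needed to derive the final conflict ("false")."""
    needed = {"false"}
    kept = []
    for step in reversed(proof):          # walk backwards from the conflict
        if step.derived in needed:
            needed |= set(step.premises)  # its premises become needed too
            kept.append(step)
    return list(reversed(kept))           # restore derivation order


proof = [
    Step("x>1", ("c1",)),
    Step("y<0", ("c2",)),           # derived but never used: trimmed away
    Step("x<1", ("c3", "c4")),
    Step("false", ("x>1", "x<1")),  # the conflict
]
explanation = trim(proof)           # 3 steps remain; y<0 is dropped
```

A real pipeline would additionally simplify the surviving steps (e.g. merging equivalent facts) before presenting the sequence to a user, as the abstract describes.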