🤖 AI Summary
This study systematically characterizes the parameterized complexity of abductive and contrastive explanations, in both their local and global variants, for transparent machine learning models (decision trees, decision sets, decision lists, Boolean circuits, and their combinations). Using formal logical modeling and parameterized complexity theory, the authors give a unified classification of explainability problems across these model classes, precisely delineating their computational boundaries: cases solvable in polynomial time, fixed-parameter tractable (FPT) cases, and cases that are W[1]- or W[2]-hard. The core contribution is the first comprehensive parameterized complexity spectrum spanning diverse explanation types and model structures. This spectrum reveals a fundamental tension between model transparency and explanation computability, providing a rigorous theoretical foundation for algorithm design and evaluation in explainable AI.
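To make the two explanation types concrete, the following is a minimal sketch, not taken from the paper: a toy decision tree over binary features, an abductive explanation (a subset-minimal set of features whose current values suffice to force the prediction), and a contrastive explanation (a smallest set of features whose change can flip the prediction). The model, the brute-force search, and all function names are illustrative assumptions; the paper studies the complexity of these problems, not these particular algorithms.

```python
from itertools import combinations, product

def model(x):
    # Toy decision tree over 3 binary features (illustrative, not from the paper):
    # predict 1 iff x[0] == 1 and x[1] == 1, else 0.
    return 1 if x[0] == 1 and x[1] == 1 else 0

def is_sufficient(instance, feats):
    """True if fixing the features in `feats` to their values in `instance`
    forces the model's prediction, no matter how the other features are set."""
    target = model(instance)
    free = [i for i in range(len(instance)) if i not in feats]
    for vals in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, vals):
            x[i] = v
        if model(x) != target:
            return False
    return True

def abductive_explanation(instance):
    """Greedy deletion: shrink the full feature set to a
    subset-minimal sufficient set (a local abductive explanation)."""
    feats = set(range(len(instance)))
    for i in range(len(instance)):
        if is_sufficient(instance, feats - {i}):
            feats.discard(i)
    return sorted(feats)

def contrastive_explanation(instance):
    """Smallest set of features whose reassignment can flip the
    prediction (a local contrastive explanation), by exhaustive search."""
    target = model(instance)
    n = len(instance)
    for size in range(1, n + 1):
        for feats in combinations(range(n), size):
            for vals in product([0, 1], repeat=size):
                x = list(instance)
                for i, v in zip(feats, vals):
                    x[i] = v
                if model(x) != target:
                    return list(feats)
    return []

print(abductive_explanation((1, 1, 0)))   # features 0 and 1 suffice for the prediction
print(contrastive_explanation((1, 1, 0)))  # changing feature 0 alone flips it
```

The brute-force enumeration here is exponential in the number of free features; the point of the parameterized analysis surveyed above is precisely to determine when such computations admit polynomial-time or FPT algorithms and when they are W[1]- or W[2]-hard.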
📝 Abstract
This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems for various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems, abductive and contrastive, in both their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Boolean Circuits, and ensembles thereof, each posing distinct explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexity of generating explanations for these models, offering insights vital for further work in XAI and contributing to the broader discourse on the necessity of transparency and accountability in AI systems.