From SHAP to Rules: Distilling Expert Knowledge from Post-hoc Model Explanations in Time Series Classification

📅 2025-08-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the poor interpretability and lack of structured representation in post-hoc explanation methods (e.g., SHAP, LIME) for time series classification. We propose a rule-based explanation framework that (i) extracts localized attributions and mitigates the Rashomon effect by aggregating the outputs of multiple explainers via weighted selection and Lasso regularization; (ii) incorporates an expert-system-inspired rule fusion mechanism to balance coverage, confidence, and conciseness; and (iii) designs visualization-assisted strategies to manage the specificity–generality trade-off. Experiments on UCI time series datasets demonstrate that the generated structured rule sets achieve explanation quality comparable to native rule-based methods (e.g., Anchor), while significantly improving readability, scalability, and cross-sample adaptability.

📝 Abstract
Explaining machine learning (ML) models for time series (TS) classification is challenging due to the inherent difficulty of interpreting raw time series, compounded by their high dimensionality. We propose a framework that converts numeric feature attributions from post-hoc, instance-wise explainers (e.g., LIME, SHAP) into structured, human-readable rules. These rules define intervals indicating when and where they apply, improving transparency. Our approach performs comparably to native rule-based methods like Anchor while scaling better to long TS and covering more instances. Rule fusion integrates rule sets through methods such as weighted selection and Lasso-based refinement to balance coverage, confidence, and simplicity, ensuring all instances receive an unambiguous, metric-optimized rule; it enhances explanations even for a single explainer. We introduce visualization techniques to manage the specificity-generalization trade-off. By aligning with expert-system principles, our framework consolidates conflicting or overlapping explanations - often resulting from the Rashomon effect - into coherent and domain-adaptable insights. Experiments on UCI datasets confirm that the resulting rule-based representations improve interpretability, decision transparency, and practical applicability for TS classification.
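The abstract's core idea - turning per-timestep attributions into interval rules that say when and where they apply - can be illustrated with a minimal sketch. The function names, the fixed-window heuristic, and the `margin` parameter below are illustrative assumptions, not the paper's actual procedure: we simply locate the contiguous window with the highest total absolute attribution and bound the series values observed there.

```python
import numpy as np

def attribution_to_interval_rule(x, attr, window=5, margin=0.1):
    # Hypothetical simplification: pick the contiguous window whose
    # absolute attributions sum highest, then state value bounds there.
    scores = np.convolve(np.abs(attr), np.ones(window), mode="valid")
    start = int(np.argmax(scores))
    end = start + window
    seg = x[start:end]
    return {
        "t_start": start,                    # "when": interval in time
        "t_end": end,
        "low": float(seg.min() - margin),    # "where": value bounds
        "high": float(seg.max() + margin),
    }

def rule_covers(rule, x):
    # A series satisfies the rule if its values inside the time
    # interval stay within the stated bounds.
    seg = x[rule["t_start"]:rule["t_end"]]
    return bool(seg.min() >= rule["low"] and seg.max() <= rule["high"])
```

A real implementation would derive the interval from the explainer's attribution structure rather than a fixed window, but the rule shape ("between t_start and t_end, values lie in [low, high]") matches the human-readable format the abstract describes.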
Problem

Research questions and friction points this paper is trying to address.

Convert numeric feature attributions into human-readable rules
Balance coverage, confidence, and simplicity in rule fusion
Improve interpretability and transparency in time series classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts SHAP attributions to human-readable rules
Uses rule fusion for balanced coverage and confidence
Visualization techniques manage specificity-generalization trade-offs
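The rule-fusion idea above - selecting from candidate rules so that coverage, confidence, and simplicity are balanced - can be sketched as a greedy weighted selection. The scoring weights and the rule representation (a dict with a coverage set, a confidence value, and a length) are assumptions for illustration; the paper also uses Lasso-based refinement, which is not reproduced here.

```python
def fuse_rules(rules, n_instances, w_cov=1.0, w_conf=1.0, w_len=0.1):
    """Greedily pick rules until every instance is covered.

    Each rule is a dict: 'covers' (set of instance ids),
    'confidence' (0..1), 'length' (number of conditions).
    The gain trades off new coverage (+), confidence (+),
    and rule length (-), mirroring the coverage/confidence/
    simplicity balance described above.
    """
    selected, uncovered = [], set(range(n_instances))
    while uncovered:
        def gain(r):
            new = len(r["covers"] & uncovered) / n_instances
            return w_cov * new + w_conf * r["confidence"] - w_len * r["length"]
        best = max(rules, key=gain)
        if not (best["covers"] & uncovered):
            break  # no remaining rule adds coverage; stop
        selected.append(best)
        uncovered -= best["covers"]
    return selected
```

Because instances are removed from `uncovered` as soon as a rule claims them, each instance ends up assigned to exactly one selected rule, echoing the "unambiguous, metric-optimized rule" property from the abstract.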
Maciej Mozolewski
Jagiellonian Human-Centered AI Lab, Mark Kac Center for Complex Systems Research, Jagiellonian University; Department of Human-Centered Artificial Intelligence, Institute of Applied Computer Science, Jagiellonian University
Szymon Bobek
Jagiellonian University
explainable artificial intelligence (XAI), artificial intelligence, machine learning, context aware systems, knowledge engineering
Grzegorz J. Nalepa
Jagiellonian University, Kraków, Poland
Artificial Intelligence, Knowledge Engineering, Explainable AI, Data Mining, Affective Computing