🤖 AI Summary
This work addresses the poor interpretability and lack of structured representation in post-hoc explanation methods (e.g., SHAP, LIME) for time series classification. We propose a rule-based explanation framework that (i) converts localized feature attributions into interval rules, aggregating the outputs of multiple explainers to mitigate the Rashomon effect; (ii) fuses the resulting rule sets with an expert-system-inspired mechanism, using weighted selection and Lasso regularization to balance coverage, confidence, and conciseness; and (iii) provides visualization-assisted strategies to manage the specificity–generality trade-off. Experiments on UCI time series datasets demonstrate that the generated structured rule sets match native rule-based methods (e.g., Anchor) in explanation quality while significantly improving readability, scalability, and cross-sample adaptability.
📝 Abstract
Explaining machine learning (ML) models for time series (TS) classification is challenging: raw time series are inherently hard to interpret, and their high dimensionality compounds the difficulty. We propose a framework that converts numeric feature attributions from post-hoc, instance-wise explainers (e.g., LIME, SHAP) into structured, human-readable rules. These rules define intervals indicating when and where they apply, improving transparency. Our approach performs comparably to native rule-based methods such as Anchor while scaling better to long TS and covering more instances. Rule fusion integrates rule sets through methods such as weighted selection and Lasso-based refinement to balance coverage, confidence, and simplicity, ensuring that every instance receives an unambiguous, metric-optimized rule; it improves explanations even when only a single explainer is used. We introduce visualization techniques to manage the specificity–generality trade-off. By aligning with expert-system principles, our framework consolidates conflicting or overlapping explanations, which often arise from the Rashomon effect, into coherent and domain-adaptable insights. Experiments on UCI datasets confirm that the resulting rule-based representations improve interpretability, decision transparency, and practical applicability for TS classification.
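To make the two-stage pipeline concrete, the sketch below illustrates one plausible reading of it: per-timestep attributions (as produced by SHAP or LIME) are turned into interval rules, and a fusion step then selects a small rule set balancing coverage and confidence. All function names and parameters here are illustrative, and the greedy weighted selection stands in for the paper's weighted-selection/Lasso refinement; this is not the authors' implementation.

```python
def attribution_to_rule(x, phi, pred_class, k=3, eps=0.1):
    """Turn a per-timestep attribution vector phi (e.g., from SHAP/LIME)
    into an interval rule: keep the k most influential timesteps, each
    with a value band of +/- eps around the observed series x.
    (Hypothetical rule format, for illustration only.)"""
    top = sorted(range(len(phi)), key=lambda t: -abs(phi[t]))[:k]
    return {"class": pred_class,
            "conds": [(t, x[t] - eps, x[t] + eps) for t in sorted(top)]}

def rule_covers(rule, x):
    """A rule fires on series x iff every interval condition holds."""
    return all(lo <= x[t] <= hi for t, lo, hi in rule["conds"])

def fuse_rules(rules, X, y, max_rules=5):
    """Greedy weighted selection: repeatedly pick the rule with the best
    coverage-times-confidence score on still-uncovered samples, so each
    instance ends up with one unambiguous rule. (A simple stand-in for
    the Lasso-based refinement described in the abstract.)"""
    uncovered = set(range(len(X)))
    chosen = []
    while uncovered and len(chosen) < max_rules:
        best, best_score, best_hits = None, 0.0, []
        for r in rules:
            hits = [i for i in uncovered if rule_covers(r, X[i])]
            if not hits:
                continue
            conf = sum(y[i] == r["class"] for i in hits) / len(hits)
            score = len(hits) * conf  # coverage x confidence
            if score > best_score:
                best, best_score, best_hits = r, score, hits
        if best is None:
            break
        chosen.append(best)
        uncovered -= set(best_hits)
    return chosen
```

A fused rule such as `t=1 in [0.8, 1.2] -> class 0` reads directly as "when and where" the explanation applies, which is the readability gain over raw attribution heatmaps.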