🤖 AI Summary
Interpretability of time-series models is critical for trustworthy deployment and debugging, yet existing methods struggle to balance accuracy and conciseness at the subsequence level. This paper introduces Implet, a novel post-hoc subsequence explanation framework that pioneers an importance-driven explanation paradigm and incorporates a cohort-based aggregation mechanism to enhance explanation compactness and readability. Implet integrates gradient sensitivity analysis, sliding-window perturbation evaluation, and group-wise consistency clustering to enable fine-grained identification of critical temporal segments. Evaluated on multiple standard time-series classification benchmarks, Implet achieves high alignment with human expert annotations (average F1 score of 0.82), substantially outperforming baseline methods. The implementation is publicly available.
📝 Abstract
Explainability in time series models is crucial for fostering trust, facilitating debugging, and ensuring interpretability in real-world applications. In this work, we introduce Implet, a novel post-hoc explainer that generates accurate and concise subsequence-level explanations for time series models. Our approach identifies critical temporal segments that significantly contribute to the model's predictions, providing enhanced interpretability beyond traditional feature-attribution methods. Based on it, we propose a cohort-based (group-level) explanation framework designed to further improve the conciseness and interpretability of our explanations. We evaluate Implet on several standard time-series classification benchmarks, demonstrating its effectiveness in improving interpretability. The code is available at https://github.com/LbzSteven/implet