Implet: A Post-hoc Subsequence Explainer for Time Series Models

📅 2025-05-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Interpretability of time-series models is critical for trustworthy deployment and debugging, yet existing methods struggle to balance accuracy and conciseness at the subsequence level. This paper introduces Implet, a novel post-hoc subsequence explanation framework that pioneers an importance-driven explanation paradigm and incorporates a cohort-based aggregation mechanism to enhance explanation compactness and readability. Implet integrates gradient sensitivity analysis, sliding-window perturbation evaluation, and group-wise consistency clustering to enable fine-grained identification of critical temporal segments. Evaluated on multiple standard time-series classification benchmarks, Implet achieves high alignment with human expert annotations (average F1 score of 0.82), substantially outperforming baseline methods. The implementation is publicly available.

Technology Category

Application Category

📝 Abstract
Explainability in time series models is crucial for fostering trust, facilitating debugging, and ensuring interpretability in real-world applications. In this work, we introduce Implet, a novel post-hoc explainer that generates accurate and concise subsequence-level explanations for time series models. Our approach identifies critical temporal segments that significantly contribute to the model's predictions, providing enhanced interpretability beyond traditional feature-attribution methods. Based on it, we propose a cohort-based (group-level) explanation framework designed to further improve the conciseness and interpretability of our explanations. We evaluate Implet on several standard time-series classification benchmarks, demonstrating its effectiveness in improving interpretability. The code is available at https://github.com/LbzSteven/implet
Problem

Research questions and friction points this paper is trying to address.

Explain subsequence-level predictions in time series models
Identify critical temporal segments for model interpretability
Provide concise cohort-based explanations for time series
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc subsequence explainer for time series
Identifies critical temporal segments for predictions
Cohort-based framework enhances explanation interpretability
🔎 Similar Papers
No similar papers found.
F
Fanyu Meng
Department of Computer Science, UC Davis
Z
Ziwen Kan
Department of Computer Science, UC Davis
Shahbaz Rezaei
Shahbaz Rezaei
University of California at Davis
Explainable AIMachine Learning SecurityComputer NetworksPerformance Evaluation
Zhaodan Kong
Zhaodan Kong
Associate Professor, University of California, Davis
human-autonomy teaminguncrewed aerial systemstrustworthy CPS/AIformal methods and control
X
Xin Chen
College of Engineering, Georgia Institute of Technology
X
Xin Liu
Department of Computer Science, UC Davis