Explanation Space: A New Perspective into Time Series Interpretability

📅 2024-09-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address core challenges in time-series interpretability, such as feature invisibility and ambiguous baseline definition, this paper introduces the "explanation space" paradigm: it shifts model decision explanations from the time domain to semantically richer signal spaces (e.g., frequency, phase, and scale domains), circumventing inherent time-domain limitations. The approach is model-agnostic, requires no architectural modification or retraining, and enables plug-and-play cross-domain interpretation. Leveraging signal transforms, including the FFT, wavelet, and Hilbert transforms, the authors construct five specialized explanation spaces, each tailored to distinct time-series characteristics, and integrate them with mainstream XAI methods (e.g., Grad-CAM, SHAP). Extensive evaluation on medical and industrial time-series benchmarks demonstrates substantial improvements in explanation fidelity and human interpretability while preserving the original model's prediction accuracy.
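The core mechanism can be sketched in a few lines: wrap the time-domain model so its input is expressed in a transformed domain, then run any attribution method against the wrapper, so importances land on frequency bins rather than time steps. The sketch below is illustrative only; `wrap_in_frequency_space`, the toy model, and the occlusion-style attribution are assumptions for demonstration, not the paper's released code. Note how the zero baseline, ill-defined in the time domain, becomes natural in the spectrum (a zeroed bin means "this frequency is absent").

```python
import numpy as np

def wrap_in_frequency_space(model):
    """Return a callable that accepts a real-input FFT spectrum and feeds
    the reconstructed time-domain signal to the original model."""
    def wrapped(spectrum):
        signal = np.fft.irfft(spectrum, n=(len(spectrum) - 1) * 2)
        return model(signal)
    return wrapped

# Toy stand-in for a trained time-domain model: its output depends on the
# energy at one frequency bin.
def toy_model(signal):
    return np.abs(np.fft.rfft(signal))[5]

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)      # signal dominated by frequency bin 5
spectrum = np.fft.rfft(x)

wrapped = wrap_in_frequency_space(toy_model)
base = wrapped(spectrum)

# Occlusion-style attribution in the frequency domain: zero out each bin
# and measure the drop in the model output.
attributions = []
for k in range(len(spectrum)):
    occluded = spectrum.copy()
    occluded[k] = 0.0              # well-defined "absent feature" baseline
    attributions.append(base - wrapped(occluded))

print(int(np.argmax(attributions)))  # → 5 (the dominant frequency bin)
```

Because the wrapper leaves the trained model untouched, the same pattern would let off-the-shelf explainers such as SHAP operate in any invertible transform domain.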

📝 Abstract
Human-understandable explanation of deep learning models is essential for various critical and sensitive applications. Unlike image or tabular data, where the importance of each input feature (for the classifier's decision) can be directly projected onto the input, distinguishing features of time series (e.g., a dominant frequency) are often hard to manifest in the time domain for a user to easily understand. Additionally, most explanation methods require a baseline value as an indication of the absence of any feature. However, the notion of a lack of feature, often defined as black pixels for vision tasks or zero/mean values for tabular data, is not well-defined in time series. Despite the adoption of explainable AI (XAI) methods from the tabular and vision domains into the time series domain, these differences limit the application of these XAI methods in practice. In this paper, we propose a simple yet effective method that allows a model originally trained on the time domain to be interpreted in other explanation spaces using existing methods. We suggest five explanation spaces, each of which can potentially alleviate these issues in certain types of time series. Our method can be easily integrated into existing platforms without any changes to trained models or XAI methods. The code will be released upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Time-series features are hard to interpret in the time domain
No well-defined baseline value for time-series explanations
XAI methods ported from vision and tabular data transfer poorly to time series
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interprets time-domain models in alternative explanation spaces
Five explanation spaces for diverse types of time series
Integrates with existing trained models and XAI methods unchanged