WinTSR: A Windowed Temporal Saliency Rescaling Method for Interpreting Time Series Deep Learning Models

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing time-series interpretability methods mostly target classification tasks, are evaluated on simple synthetic data with custom baseline models rather than the latest time-series architectures, or require training an auxiliary model, limiting their ability to capture dynamic temporal dependencies and to scale feature importance along the time dimension. This work introduces Windowed Temporal Saliency Rescaling (WinTSR), an interpretation method that explicitly captures temporal dependencies among past time steps and efficiently rescales feature importance by the importance of each time step. Benchmarked against 10 recent interpretation techniques with 5 state-of-the-art deep-learning models of different architectures, including a time-series foundation model, on 3 real-world datasets covering both classification and regression, WinTSR significantly outperforms other local interpretation methods in overall performance. The authors also release a novel open-source framework for interpreting the latest time-series transformers and foundation models.

📝 Abstract
Interpreting complex time series forecasting models is challenging due to the temporal dependencies between time steps and the dynamic relevance of input features over time. Existing interpretation methods are limited: they focus mostly on classification tasks, evaluate using custom baseline models instead of the latest time series models, use simple synthetic datasets, and require training another model. We introduce a novel interpretation method, Windowed Temporal Saliency Rescaling (WinTSR), addressing these limitations. WinTSR explicitly captures temporal dependencies among the past time steps and efficiently scales the feature importance with this time importance. We benchmark WinTSR against 10 recent interpretation techniques with 5 state-of-the-art deep-learning models of different architectures, including a time series foundation model. We use 3 real-world datasets for both time-series classification and regression. Our comprehensive analysis shows that WinTSR significantly outperforms other local interpretation methods in overall performance. Finally, we provide a novel, open-source framework to interpret the latest time series transformers and foundation models.
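The abstract describes a two-step idea: first score the importance of each past time step, then rescale per-feature importance by that time importance. A minimal NumPy sketch of this pattern, assuming a mean-replacement baseline and absolute prediction change as the importance measure (illustrative choices, not the paper's exact formulation; `model_fn` and `window` are hypothetical names):

```python
import numpy as np

def windowed_saliency_rescaling(model_fn, x, window=2):
    """Illustrative sketch: model_fn maps an array of shape (T, F) to a
    scalar prediction; x is one input of shape (T, F)."""
    T, F = x.shape
    base = model_fn(x)

    # Step 1: time importance. Mask a window of past steps (replaced by the
    # window mean, an assumed baseline) and measure the prediction change.
    time_imp = np.zeros(T)
    for t in range(T):
        xm = x.copy()
        lo = max(0, t - window + 1)
        xm[lo:t + 1, :] = x[lo:t + 1, :].mean()
        time_imp[t] = abs(model_fn(xm) - base)

    # Step 2: per-feature importance at each step, rescaled by time importance.
    # Steps with zero time importance are skipped, saving model calls.
    saliency = np.zeros((T, F))
    for t in range(T):
        if time_imp[t] == 0:
            continue
        for f in range(F):
            xm = x.copy()
            xm[t, f] = x[:, f].mean()  # mask a single (time, feature) cell
            saliency[t, f] = abs(model_fn(xm) - base) * time_imp[t]
    return saliency
```

Skipping the feature loop at unimportant time steps is what makes the time dimension act as an efficiency filter as well as a scaling factor.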
Problem

Research questions and friction points this paper is trying to address.

Interpreting time series deep learning models faithfully
Capturing temporal dependencies among past time steps
Existing local interpretation methods fall short on the latest architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Captures temporal dependencies effectively
Scales feature importance dynamically
Open-source framework for the latest models