EXCODER: EXplainable Classification Of DiscretE time series Representations

📅 2026-02-13

📝 Abstract
Deep learning has significantly improved time series classification, yet the lack of explainability in these models remains a major challenge. While Explainable AI (XAI) techniques aim to make model decisions more transparent, their effectiveness is often hindered by the high dimensionality and noise present in raw time series data. In this work, we investigate whether transforming time series into discrete latent representations, using methods such as Vector Quantized Variational Autoencoders (VQ-VAE) and Discrete Variational Autoencoders (DVAE), not only preserves but enhances explainability by reducing redundancy and focusing on the most informative patterns. We show that applying XAI methods to these compressed representations leads to concise and structured explanations that maintain faithfulness without sacrificing classification performance. Additionally, we propose Similar Subsequence Accuracy (SSA), a novel metric that quantitatively assesses the alignment between XAI-identified salient subsequences and the label distribution in the training data. SSA provides a systematic way to validate whether the features highlighted by XAI methods are truly representative of the learned classification patterns. Our findings demonstrate that discrete latent representations not only retain the essential characteristics needed for classification but also offer a pathway to more compact, interpretable, and computationally efficient explanations in time series analysis.
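The discrete latent representations the abstract refers to are produced by VQ-style quantization: each continuous encoder output is snapped to its nearest entry in a learned codebook, turning a time series into a short sequence of discrete tokens. A minimal illustrative sketch of that lookup step (not the paper's implementation; the function name and toy dimensions are assumptions) might look like:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent vector to its nearest codebook entry.

    latents:  (T, D) array of encoder outputs for one time series
    codebook: (K, D) array of learned code vectors
    Returns (T,) discrete code indices and the (T, D) quantized latents.
    """
    # Squared Euclidean distance between every latent and every code vector
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)  # one discrete token per time step
    return indices, codebook[indices]

# Toy example: 4 time steps of 2-D latents, codebook with 3 entries
rng = np.random.default_rng(0)
codes, quantized = quantize(rng.normal(size=(4, 2)), rng.normal(size=(3, 2)))
```

XAI attribution can then be applied to the short `codes` sequence instead of the raw series, which is what yields the more compact explanations described above.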
Problem

Research questions and friction points this paper is trying to address.

time series classification
explainable AI
discrete representations
interpretability
XAI evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

discrete latent representations
explainable AI
time series classification
VQ-VAE
Similar Subsequence Accuracy
Yannik Hahn
Institute for Technologies and Management of Digital Transformation (TMDT), University of Wuppertal, Rainer-Gruenter-Straße 21, 42119 Wuppertal, Germany
Antonin Königsfeld
Institute for Technologies and Management of Digital Transformation (TMDT), University of Wuppertal, Rainer-Gruenter-Straße 21, 42119 Wuppertal, Germany
Hasan Tercan
Institute for Technologies and Management of Digital Transformation (TMDT), University of Wuppertal, Rainer-Gruenter-Straße 21, 42119 Wuppertal, Germany
Tobias Meisen
Bergische Universität Wuppertal, previously RWTH Aachen University
Industrial AI · Deep Learning · Deep Reinforcement Learning · Semantic Technologies · Knowledge Graph