Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection

📅 2024-06-27
🏛️ PHM Society European Conference
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoencoder-based multivariate time-series anomaly detection in mechatronic systems suffers from poor interpretability due to its “black-box” nature. Method: This paper proposes an interpretable framework integrating differentiable feature selection with gradient-driven counterfactual generation—the first application of counterfactual explanation to autoencoder-based time-series anomaly detection. The framework ensures validity, sparsity, and plausibility of explanations by identifying critical sensor channels via a feature selection module and generating sparse, semantically meaningful, and realistic counterfactual sequences through gradient-based optimization. Contribution/Results: Evaluated on the SKAB benchmark and multiple real-world industrial datasets, the method reduces the number of explanatory signals by over 35% while achieving an average explanation validity of 92.1%, significantly outperforming conventional attribution methods. It substantially enhances model trustworthiness and debuggability without compromising detection performance.
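As a rough illustration of the gradient-driven counterfactual idea described above (a minimal sketch, not the paper's implementation: the toy linear autoencoder, its untrained random weights, and all parameter values here are hypothetical), one can minimize the reconstruction error plus an L1 proximity penalty, so that only a few channels are edited:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy linear autoencoder: 5 sensor channels, 2 latent dims.
# Weights are random stand-ins, not a trained model.
n_features = 5
W_enc = rng.normal(size=(2, n_features)) * 0.5
W_dec = rng.normal(size=(n_features, 2)) * 0.5

def recon_error(x):
    """Anomaly score: squared reconstruction error of the autoencoder."""
    return float(np.sum((x - W_dec @ (W_enc @ x)) ** 2))

def counterfactual(x, lam=0.1, lr=0.05, steps=500):
    """Gradient descent on recon error + lam * ||x_cf - x||_1.

    The L1 term encourages sparse edits (few channels changed),
    mirroring the sparsity property the summary mentions.
    For the linear model, the gradient is computed in closed form.
    """
    A = np.eye(n_features) - W_dec @ W_enc   # recon error = ||A x||^2
    x_cf = x.copy()
    for _ in range(steps):
        grad = 2 * A.T @ A @ x_cf + lam * np.sign(x_cf - x)
        x_cf -= lr * grad
    return x_cf

x_anom = rng.normal(size=n_features) * 3.0   # hypothetical anomalous sample
x_cf = counterfactual(x_anom)
print(recon_error(x_anom), recon_error(x_cf))  # score drops for the counterfactual
```

In a real autoencoder the input gradient would come from automatic differentiation rather than a closed form, but the optimization loop has the same shape.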

📝 Abstract
The complexity of modern electro-mechanical systems requires the development of sophisticated diagnostic methods, such as anomaly detection, capable of detecting deviations. Conventional anomaly detection approaches such as signal processing and statistical modelling often struggle to handle the intricacies of complex systems, particularly when dealing with multivariate signals. In contrast, neural network-based anomaly detection methods, especially Auto-Encoders, have emerged as a compelling alternative, demonstrating remarkable performance. However, Auto-Encoders exhibit inherent opaqueness in their decision-making processes, hindering their practical implementation at scale. Addressing this opacity is essential for enhancing the interpretability and trustworthiness of anomaly detection models. In this work, we address this challenge by employing a feature selector to identify the relevant signals and counterfactual explanations to give context to the model output. We tested this approach on the SKAB benchmark dataset and an industrial time-series dataset. The gradient-based counterfactual explanation approach was evaluated via validity, sparsity, and distance measures. Our experimental findings illustrate that the proposed counterfactual approach offers meaningful and valuable insights into the model's decision-making process by explaining fewer signals than conventional approaches. These insights enhance the trustworthiness and interpretability of anomaly detection models.
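The three evaluation criteria named in the abstract can be stated concretely. The metric definitions below are common conventions, sketched here for illustration (the exact formulations and tolerances used in the paper may differ): validity checks whether the counterfactual is classified as normal, sparsity counts how many channels were left unchanged, and distance measures proximity to the original sample.

```python
import numpy as np

def validity(scores_cf, threshold):
    """Fraction of counterfactuals whose anomaly score falls below the
    detection threshold, i.e. the model now deems them normal."""
    return float(np.mean(scores_cf < threshold))

def sparsity(x, x_cf, tol=1e-3):
    """Fraction of channels left (almost) unchanged; higher is sparser."""
    return float(np.mean(np.abs(x_cf - x) <= tol))

def distance(x, x_cf):
    """L1 proximity between the original sample and its counterfactual."""
    return float(np.sum(np.abs(x_cf - x)))

# Illustrative values: a 4-channel sample where only channel 2 is edited.
x = np.array([1.0, 2.0, 3.0, 4.0])
x_cf = np.array([1.0, 2.0, 0.5, 4.0])
print(sparsity(x, x_cf), distance(x, x_cf))  # 0.75 2.5
```

A sparse, valid counterfactual with small distance points the engineer at the few sensor channels that drive the anomaly score, which is the interpretability gain the paper targets.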
Problem

Research questions and friction points this paper is trying to address.

Autoencoder
Time Series
Anomaly Detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved Autoencoder
Feature Selection
Counterfactual Explanations
Abhishek Srinivasan
KTH, Scania
Varun Singapuri Ravi
Connected Systems, Scania CV AB, Södertälje, Sweden; Linköping University, Sweden
J. C. Andresen
Connected Systems, Scania CV AB, Södertälje, Sweden
Anders Holst
RISE Research Institutes of Sweden, Stockholm, Sweden; KTH Royal Institute of Technology, Stockholm, Sweden