🤖 AI Summary
Existing XAI methods for time series provide only point- or segment-level attributions, failing to reveal the causal influence of high-level semantic patterns—such as seasonality, trends, and anomalies—on model decisions. To address this gap, we propose C-SHAP, the first concept-level SHAP framework for time series, which extends SHAP theory to interpretable semantic concepts. C-SHAP integrates STL decomposition, concept modeling, and axiomatic attribution into a modular pipeline for quantifying concept-level contributions. This enables an explanatory leap from low-level signals to human-understandable semantics. Evaluated on energy load forecasting, C-SHAP significantly improves expert trust and diagnostic efficiency. Its explanations are physically meaningful, model-agnostic, and empirically verifiable—demonstrating strong generalization across diverse forecasting models while preserving fidelity to underlying data dynamics.
📝 Abstract
Time series are ubiquitous in domains such as energy forecasting, healthcare, and industry. Using AI systems, some tasks within these domains can be efficiently handled. Explainable AI (XAI) aims to increase the reliability of AI solutions by explaining model reasoning. For time series, many XAI methods provide point- or sequence-based attribution maps. These methods explain model reasoning in terms of low-level patterns. However, they do not capture high-level patterns that may also influence model reasoning. We propose a concept-based method to provide explanations in terms of these high-level patterns. In this paper, we present C-SHAP for time series, an approach which determines the contribution of concepts to a model outcome. We provide a general definition of C-SHAP and present an example implementation using time series decomposition. Additionally, we demonstrate the effectiveness of the methodology through a use case from the energy domain.