On the Necessity of Multi-Domain Explanation: An Uncertainty Principle Approach for Deep Time Series Models

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mainstream time-series explainability methods generate attributions solely in the time domain, neglecting complementary perspectives such as the frequency domain, which leads to incomplete and potentially misleading interpretations. Method: The authors bring the signal-processing uncertainty principle into eXplainable AI (XAI) for the first time, establishing a theoretical criterion for time-frequency attribution consistency. They show that single-domain attribution suffers from inherent limitations, building on a fundamental lower bound on how localized a signal can simultaneously be in time and frequency. Methodologically, they combine Fourier analysis with existing XAI techniques, including Grad-CAM and Integrated Gradients, to enable cross-domain attribution evaluation. Results: Systematic experiments across diverse time-series models, XAI methods, and tasks (classification and forecasting) reveal pervasive time-frequency attribution inconsistency, i.e., frequent violations of the uncertainty principle, confirming that single-domain explanations are often inadequate. The core contribution is the formal articulation and empirical validation of the necessity and feasibility of multi-domain explanation, establishing a new theoretical foundation for time-series XAI.
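For reference, two standard forms of the time-frequency lower bound referred to above are the continuous Heisenberg-Gabor inequality and the discrete Donoho-Stark inequality; the paper's exact criterion may use a different concentration measure, so these are given only as background:

$$
\sigma_t \, \sigma_\omega \;\ge\; \frac{1}{2},
\qquad\qquad
\lVert x \rVert_0 \cdot \lVert \hat{x} \rVert_0 \;\ge\; N,
$$

where $\sigma_t$ and $\sigma_\omega$ are the spreads of a signal's energy distribution in time and (angular) frequency, $x \in \mathbb{C}^N$ is a discrete signal, $\hat{x}$ is its DFT, and $\lVert \cdot \rVert_0$ counts nonzero entries. If the time- and frequency-domain attributions for the same prediction are jointly more concentrated than such a bound allows for any single signal and its transform, they cannot be counterparts of one another and must highlight distinct features.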

📝 Abstract
A prevailing approach to explaining time series models is to generate attributions in the time domain. A recent development in time series XAI is the concept of explanation spaces, where any model trained in the time domain can be interpreted with any existing XAI method in alternative domains, such as frequency. The prevailing practice is to present XAI attributions either in the time domain or in the domain where the attribution is most sparse. In this paper, we demonstrate that in certain cases, XAI methods can generate attributions that highlight fundamentally different features in the time and frequency domains that are not direct counterparts of one another. This suggests that attributions from both domains should be presented to achieve a more comprehensive interpretation, demonstrating the necessity of multi-domain explanation. To quantify when such cases arise, we introduce the uncertainty principle (UP), originally developed in quantum mechanics and later studied in harmonic analysis and signal processing, to the XAI literature. This principle establishes a lower bound on how much a signal can be simultaneously localized in both the time and frequency domains. By leveraging this concept, we assess whether attributions in the time and frequency domains violate this bound, indicating that they emphasize distinct features. In other words, UP provides a sufficient condition that the time- and frequency-domain explanations do not match and, hence, should both be presented to the end user. We validate the effectiveness of this approach across various deep learning models, XAI methods, and a wide range of classification and forecasting datasets. The frequent occurrence of UP violations across various datasets and XAI methods highlights the limitations of existing approaches that focus solely on time-domain explanations. This underscores the need for multi-domain explanations as a new paradigm.
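As a rough illustration of how the violation test described in the abstract could be implemented, the sketch below uses the discrete Donoho-Stark form of the bound with an energy-based effective support; the function names, the 0.95 energy threshold, and the bound itself are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def effective_support(attr, energy_frac=0.95):
    """Smallest number of entries whose squared magnitudes capture
    `energy_frac` of the attribution's total energy (an illustrative
    concentration measure; the paper's choice may differ)."""
    energy = np.sort(np.abs(attr) ** 2)[::-1]
    cum = np.cumsum(energy) / energy.sum()
    return int(np.searchsorted(cum, energy_frac) + 1)

def violates_uncertainty_bound(time_attr, freq_attr):
    """Sufficient condition that the two attributions highlight distinct
    features: their joint concentration is tighter than the discrete
    (Donoho-Stark) bound |supp_t| * |supp_f| >= N permits for any single
    signal and its DFT."""
    n = len(time_attr)
    return effective_support(time_attr) * effective_support(freq_attr) < n

# Illustrative usage with hypothetical attributions, e.g. produced by
# Integrated Gradients applied in the time and frequency explanation spaces.
n = 256
time_attr = np.zeros(n)
time_attr[100:108] = 1.0          # attribution concentrated on a short burst
freq_attr = np.zeros(n)
freq_attr[[3, 7, 20, 21]] = 1.0   # attribution concentrated on a few frequencies

if violates_uncertainty_bound(time_attr, freq_attr):
    print("UP violated: present both time- and frequency-domain explanations.")
```

As in the abstract's framing, a violation here is a sufficient rather than necessary condition for mismatch: when it fires, both explanations should be shown to the end user.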
Problem

Research questions and friction points this paper is trying to address.

Explaining deep time series models requires multi-domain attribution.
Time and frequency domain explanations often highlight different features.
Uncertainty principle quantifies when multi-domain explanations are necessary.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-domain explanation for time series models
Uncertainty principle to assess domain attributions
Validation across diverse models and datasets
Shahbaz Rezaei
University of California at Davis
Explainable AI, Machine Learning Security, Computer Networks, Performance Evaluation
A. Halev
University of California Davis, CA, USA
Xin Liu
University of California Davis, CA, USA