Exploring Neural Granger Causality with xLSTMs: Unveiling Temporal Dependencies in Complex Data

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of modeling long-range causal dependencies in nonlinear time series. We propose Granger Causal xLSTM (GC-xLSTM), the first framework integrating extended Long Short-Term Memory (xLSTM) networks with the Granger causality paradigm. To identify lagged causal variables adaptively, we introduce a dynamic Lasso-based sparse projection mechanism. Furthermore, we design an end-to-end jointly optimized architecture that preserves causal interpretability while enhancing robustness. Extensive experiments on three benchmark datasets demonstrate that GC-xLSTM significantly outperforms classical Granger methods and state-of-the-art RNN- and Transformer-based baselines. It accurately captures nonlinear, long-range causal relationships, achieving superior forecasting accuracy, strong interpretability of inferred causal structures, and improved stability under distributional shifts.

📝 Abstract
Causality in time series can be difficult to determine, especially in the presence of non-linear dependencies. The concept of Granger causality helps analyze potential relationships between variables, offering a method to determine whether one time series can predict (Granger cause) future values of another. Although successful, Granger causal methods still struggle to capture long-range relations between variables. To this end, we leverage the recently successful Extended Long Short-Term Memory (xLSTM) architecture and propose Granger causal xLSTMs (GC-xLSTM). It first enforces sparsity between the time series components by using a novel dynamic lasso penalty on the initial projection. Specifically, we adaptively improve the model and identify sparsity candidates. Our joint optimization procedure then ensures that the Granger causal relations are recovered in a robust fashion. Our experimental evaluations on three datasets demonstrate the overall efficacy of our proposed GC-xLSTM model.
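The abstract's core notion, that one series Granger-causes another if its past improves prediction of the other's future, can be illustrated with the classical linear test: compare least-squares residuals of a model using only the target's own lags against a model that also includes the candidate's lags. This is a minimal sketch of that idea (function name, lag count, and the 10% improvement threshold are our own illustrative choices, not the paper's nonlinear method):

```python
import numpy as np

def granger_improves(y, x, lag=2):
    """Return True if adding lags of x clearly reduces the least-squares
    error of predicting y beyond what y's own lags achieve.
    Illustrative linear check only; no formal significance test."""
    T = len(y)
    # Restricted design: lags of y only; full design adds lags of x.
    A_r = np.array([y[t - lag:t] for t in range(lag, T)])
    A_f = np.array([np.concatenate([y[t - lag:t], x[t - lag:t]])
                    for t in range(lag, T)])
    b = y[lag:]

    def sse(A):
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = b - A @ coef
        return float(resid @ resid)

    # Require a clear (10%) reduction, not just any in-sample improvement.
    return sse(A_f) < 0.9 * sse(A_r)
```

On synthetic data where `y` is driven by the previous value of `x`, the check fires in the causal direction and not in the reverse one; GC-xLSTM replaces this linear regression with an xLSTM predictor per target series, which is what lets it capture nonlinear and long-range dependencies.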
Problem

Research questions and friction points this paper is trying to address.

Determining causality in time series with non-linear dependencies.
Capturing long-range relations between variables in Granger causality.
Improving model robustness and sparsity in identifying causal relations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended Long Short-Term Memory (xLSTM) architecture
Dynamic lasso penalty for sparsity enforcement
Joint optimization for robust causality recovery
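The lasso penalty in the list above lives on the initial input projection: when the weights attached to an input series shrink to (near) zero for a given target's predictor, that series is deemed not to Granger-cause the target. A hedged sketch of how such group norms are read out as a causal adjacency matrix (the function, matrix shapes, and threshold `tau` are our own illustration; the paper learns the sparsity jointly with a dynamic penalty rather than thresholding after the fact):

```python
import numpy as np

def causal_adjacency(weight_matrices, tau=0.1):
    """Given, for each target series j, an input projection weight matrix
    W_j of shape (hidden_dim, n_series), mark 'i Granger-causes j' when
    the L2 norm of column i of W_j exceeds the threshold tau."""
    n = len(weight_matrices)
    adj = np.zeros((n, n), dtype=bool)
    for j, W in enumerate(weight_matrices):
        col_norms = np.linalg.norm(W, axis=0)  # one norm per input series
        adj[:, j] = col_norms > tau            # adj[i, j]: i -> j
    return adj
```

The joint optimization then balances predictive loss against this sparsity pressure, so the surviving columns form an interpretable causal graph rather than a dense, hard-to-read weight matrix.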
Harsh Poonia
Carnegie Mellon University
Machine Learning
Felix Divo
TU Darmstadt, Germany
Machine Learning
K. Kersting
AI & ML Group, Technische Universität Darmstadt, Darmstadt, Germany; Hessian Center for AI (hessian.AI), Darmstadt, Germany; Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany; German Research Center for AI (DFKI), Darmstadt, Germany
D. Dhami
Uncertainty in AI Group, Eindhoven University of Technology, Eindhoven, Netherlands