🤖 AI Summary
Existing graph state-space models (SSMs) capture long-range graph dependencies, but they either compromise permutation equivariance or restrict their focus to pairwise interactions rather than sequences. To address this, the authors propose GRAMA, a graph-adaptive framework built on a learnable autoregressive moving-average (ARMA) filter that preserves permutation equivariance by recasting static graph data as sequences. GRAMA employs a selective attention mechanism to dynamically learn ARMA coefficients, enabling efficient and flexible long-range information propagation, and the paper establishes theoretical connections between GRAMA and selective SSMs that shed light on its ability to capture long-range dependencies. Across 14 synthetic and real-world datasets, GRAMA consistently outperforms its backbone models and is competitive with state-of-the-art methods.
📝 Abstract
Graph State Space Models (SSMs) have recently been introduced to enhance Graph Neural Networks (GNNs) in modeling long-range interactions. Despite their success, existing methods either compromise on permutation equivariance or limit their focus to pairwise interactions rather than sequences. Building on the connection between Autoregressive Moving Average (ARMA) models and SSMs, in this paper we introduce GRAMA, a graph-adaptive method based on a learnable ARMA framework that addresses these limitations. By transforming static graph data into sequences, GRAMA leverages the strengths of the ARMA framework while preserving permutation equivariance. Moreover, GRAMA incorporates a selective attention mechanism for dynamic learning of ARMA coefficients, enabling efficient and flexible long-range information propagation. We also establish theoretical connections between GRAMA and Selective SSMs, providing insights into its ability to capture long-range dependencies. Extensive experiments on 14 synthetic and real-world datasets demonstrate that GRAMA consistently outperforms backbone models and performs competitively with state-of-the-art methods.
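To make the core idea concrete, here is a minimal sketch of one step of an ARMA(p, q) recurrence on graph signals and why it stays permutation-equivariant. This is a hypothetical illustration, not the authors' implementation: `A_hat` (a normalized adjacency), `X_hist`, `Y_hist`, and the coefficient vectors `phi` and `theta` are all assumed names, and in GRAMA the coefficients would be produced by the selective attention mechanism rather than fixed.

```python
import numpy as np

def arma_graph_step(A_hat, X_hist, Y_hist, phi, theta):
    """One step of an ARMA(p, q) recurrence on graph signals (illustrative).

    A_hat  : (n, n) normalized adjacency / propagation matrix
    X_hist : list of q+1 input signals, each (n, d), most recent first
    Y_hist : list of p past output signals, each (n, d), most recent first
    phi    : p autoregressive coefficients (in GRAMA, learned adaptively)
    theta  : q+1 moving-average coefficients

    Because every term applies the same graph operator to all nodes,
    relabeling the nodes by a permutation P permutes the output by P,
    i.e. the update is permutation-equivariant.
    """
    # Autoregressive part: propagate past outputs over the graph.
    y = sum(p_i * (A_hat @ y_i) for p_i, y_i in zip(phi, Y_hist))
    # Moving-average part: mix the recent input signals.
    y = y + sum(t_j * x_j for t_j, x_j in zip(theta, X_hist))
    return y
```

A quick sanity check of equivariance: for any permutation matrix `P`, feeding `P @ A_hat @ P.T` and permuted histories yields `P @ y`, matching the property the paper requires of graph models.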