🤖 AI Summary
Existing heterogeneous temporal graph neural networks (HTGNNs) commonly adopt a decoupled spatiotemporal modeling paradigm, resulting in weak spatiotemporal interaction and high model complexity. To address this, we propose SE-HTGNN—a lightweight, end-to-end framework that tightly integrates spatial and temporal information. First, we design a history-guided dynamic attention mechanism that explicitly encodes temporal dependencies within spatial message passing. Second, we incorporate large language models (LLMs) via prompt learning to extract semantic priors of node types, thereby enhancing structural awareness. SE-HTGNN achieves state-of-the-art prediction accuracy while accelerating inference by up to 10× and significantly reducing computational overhead. Its core innovations lie in (i) internalizing temporal modeling into the spatial learning process, and (ii) the first systematic integration of LLM-derived semantic priors into heterogeneous temporal graph representation learning.
📝 Abstract
Heterogeneous temporal graphs (HTGs) are ubiquitous data structures in the real world. Recently, to enhance representation learning on HTGs, numerous attention-based neural networks have been proposed. Despite these successes, existing methods rely on a decoupled temporal and spatial learning paradigm, which weakens spatio-temporal interactions and leads to high model complexity. To bridge this gap, we propose a novel learning paradigm for HTGs called Simple and Efficient Heterogeneous Temporal Graph Neural Network (SE-HTGNN). Specifically, we innovatively integrate temporal modeling into spatial learning via a novel dynamic attention mechanism, which retains attention information from historical graph snapshots to guide subsequent attention computation, thereby improving the overall discriminative representation learning of HTGs. Additionally, to comprehensively and adaptively understand HTGs, we leverage large language models to prompt SE-HTGNN, enabling the model to capture the implicit properties of node types as prior knowledge. Extensive experiments demonstrate that SE-HTGNN achieves up to 10× speed-up over the state-of-the-art and latest baseline while maintaining the best forecasting accuracy.
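The core idea of the history-guided dynamic attention can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy illustration, not the paper's actual implementation: GAT-style attention scores are computed from the current snapshot's features, and the previous snapshot's attention map (`prev_att`, weighted by an assumed mixing coefficient `lam`) is added to the raw scores before normalization, so temporal dependencies are folded directly into spatial message passing.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def history_guided_attention(h, adj, W, a, prev_att=None, lam=0.5):
    """One hypothetical attention step on a single snapshot.

    h        : (n, d_in) node features of the current snapshot
    adj      : (n, n) binary adjacency (1 = edge, incl. self-loops)
    W        : (d_in, d_out) shared projection
    a        : (2 * d_out,) attention vector, split into src/dst halves
    prev_att : (n, n) attention map carried over from the previous snapshot
    lam      : mixing weight for the historical attention (an assumption)
    """
    z = h @ W                                   # project node features
    d = z.shape[1]
    # GAT-style pairwise scores: a^T [z_i || z_j]
    scores = (z @ a[:d])[:, None] + (z @ a[d:])[None, :]
    scores = np.where(adj > 0, scores, -1e9)    # mask non-edges
    if prev_att is not None:                    # inject historical attention
        scores = scores + lam * prev_att
    att = softmax(scores)
    att = np.where(adj > 0, att, 0.0)
    return att @ z, att                         # new features + attention to carry forward
```

Looping over snapshots while threading `att` from one call into the next is what couples the temporal and spatial computations, in contrast to decoupled designs that run a separate temporal module (e.g. an RNN) on top of per-snapshot spatial encoders.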