A Transfer Framework for Enhancing Temporal Graph Learning in Data-Scarce Settings

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Temporal Graph Neural Networks (TGNNs) suffer performance degradation when training data is scarce, limiting their practical deployment. Method: This paper proposes a transferable, structured bipartite encoding framework for TGNNs that decouples node memory from feature representation. It introduces a reusable memory module and inductive patterns that carry over across datasets, enabling cross-domain knowledge transfer. The framework uses unsupervised pretraining followed by low-resource fine-tuning to transfer knowledge efficiently across heterogeneous temporal graphs. Contribution/Results: On real-world benchmarks in low-data regimes, the approach improves over non-transfer baselines by up to 56% and outperforms existing transfer methods by 36%. It addresses the poor cross-dataset generalizability of TGNNs, significantly advancing their applicability in data-scarce scenarios.

📝 Abstract
Dynamic interactions between entities are prevalent in domains like social platforms, financial systems, healthcare, and e-commerce. These interactions can be effectively represented as time-evolving graphs, where predicting future connections is a key task in applications such as recommendation systems. Temporal Graph Neural Networks (TGNNs) have achieved strong results for such predictive tasks but typically require extensive training data, which is often limited in real-world scenarios. One approach to mitigating data scarcity is leveraging pre-trained models from related datasets. However, direct knowledge transfer between TGNNs is challenging due to their reliance on node-specific memory structures, making them inherently difficult to adapt across datasets. To address this, we introduce a novel transfer approach that disentangles node representations from their associated features through a structured bipartite encoding mechanism. This decoupling enables more effective transfer of memory components and other learned inductive patterns from one dataset to another. Empirical evaluations on real-world benchmarks demonstrate that our method significantly enhances TGNN performance in low-data regimes, outperforming non-transfer baselines by up to 56% and surpassing existing transfer strategies by 36%.
Problem

Research questions and friction points this paper is trying to address.

Enhancing temporal graph learning in data-scarce settings.
Overcoming challenges in transferring knowledge between temporal graph neural networks.
Improving TGNN performance with a novel transfer approach.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured bipartite encoding mechanism for node representation
Decoupling node features for effective knowledge transfer
Enhanced TGNN performance in low-data scenarios
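The core idea above can be illustrated with a minimal sketch: a memory module keyed by node identity is kept separate from a dataset-specific feature projector, so the memory component can be reused when moving to a new dataset. The class names, update rule, and fixed projection below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of decoupling transferable node memory from
# dataset-specific features; names and update rules are assumptions.

class SharedMemoryModule:
    """Transferable memory: keyed by node IDs, independent of raw features."""
    def __init__(self, dim):
        self.dim = dim
        self.state = {}  # node_id -> memory vector

    def read(self, node_id):
        # Unseen nodes start from a zero memory vector.
        return self.state.get(node_id, [0.0] * self.dim)

    def write(self, node_id, message):
        # Toy decayed-average update; real TGNNs use a learned GRU-style cell.
        prev = self.read(node_id)
        self.state[node_id] = [0.9 * p + 0.1 * m for p, m in zip(prev, message)]


class FeatureProjector:
    """Dataset-specific: maps raw node features into the shared memory space."""
    def __init__(self, in_dim, mem_dim):
        # Fixed identity projection for illustration; in practice a learned layer.
        self.weights = [[1.0 if i == j else 0.0 for j in range(in_dim)]
                        for i in range(mem_dim)]

    def project(self, features):
        return [sum(w * f for w, f in zip(row, features)) for row in self.weights]


# "Pretrain" on a source dataset: the memory module accumulates state.
memory = SharedMemoryModule(dim=2)
src_proj = FeatureProjector(in_dim=2, mem_dim=2)
memory.write("u1", src_proj.project([1.0, 0.0]))

# Transfer: reuse the same memory module on a target dataset, swapping in a
# new projector fine-tuned to the target's feature space.
tgt_proj = FeatureProjector(in_dim=2, mem_dim=2)
```

Because the memory lives in a feature-agnostic space, only the small projector needs fine-tuning on the target dataset, which is what makes the low-resource transfer setting tractable.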
Shubham Gupta
Indian Institute of Technology, Delhi, New Delhi, India
Srikanta Bedathur
IIT Delhi
Databases · Information Retrieval · Data Mining