SAGE: Scale-Aware Gradual Evolution for Continual Knowledge Graph Embedding

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of varying update scales in dynamically evolving knowledge graphs and the lack of scale adaptability and systematic evaluation in existing methods, this paper proposes SAGE, a Scale-Aware Gradual Evolution framework. Methodologically, SAGE introduces, for the first time, an update-scale-aware mechanism that dynamically adjusts embedding dimensionality according to the volume of newly injected facts. It further designs a scale-driven embedding space expansion strategy and a dynamic knowledge distillation technique, which jointly preserve historical knowledge while enabling efficient integration of new facts. Extensive experiments across seven benchmark datasets demonstrate that SAGE consistently outperforms fixed-dimension baselines across all temporal snapshots: average MRR, Hits@1, and Hits@10 improve by 1.38%, 1.25%, and 1.60%, respectively. These results validate the effectiveness and generalizability of scale-adaptive modeling for dynamic knowledge graph embedding.

📝 Abstract
Traditional knowledge graph (KG) embedding methods aim to represent entities and relations in a low-dimensional space, primarily focusing on static graphs. However, real-world KGs evolve dynamically with the constant addition of entities, relations, and facts. To address this dynamic nature of KGs, several continual knowledge graph embedding (CKGE) methods have been developed to efficiently update KG embeddings to accommodate new facts while maintaining learned knowledge. As KGs grow at different rates and scales in real-world scenarios, existing CKGE methods often fail to consider the varying scales of updates and lack systematic evaluation throughout the entire update process. In this paper, we propose SAGE, a scale-aware gradual evolution framework for CKGE. Specifically, SAGE first determines the embedding dimensions based on the update scales and expands the embedding space accordingly. A Dynamic Distillation mechanism is further employed to balance the preservation of learned knowledge and the incorporation of new facts. We conduct extensive experiments on seven benchmarks, and the results show that SAGE consistently outperforms existing baselines, with notable improvements of 1.38% in MRR, 1.25% in H@1, and 1.60% in H@10. Furthermore, experiments comparing SAGE with methods using fixed embedding dimensions show that SAGE achieves optimal performance on every snapshot, demonstrating the importance of adaptive embedding dimensions in CKGE. The code of SAGE is publicly available at: https://github.com/lyfxjtu/Dynamic-Embedding.
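The abstract describes two mechanical steps: picking an embedding dimension from the update scale, then expanding existing embeddings into the larger space. The paper does not spell out the exact rule, so the following is a minimal illustrative sketch under assumed conventions; the function names, the logarithmic growth rule, and all constants are hypothetical, and zero-padding is only one plausible way to expand the space without disturbing learned scores.

```python
import math

def choose_embedding_dim(num_new_facts, base_dim=200, max_dim=1000, growth=50):
    # Hypothetical scale-aware rule: grow the dimension logarithmically
    # with the size of the incoming update, capped at max_dim.
    extra = int(growth * math.log1p(num_new_facts / 1000))
    return min(base_dim + extra, max_dim)

def expand_embeddings(embeddings, new_dim):
    # Zero-pad every existing entity vector to the new dimensionality, so
    # previously learned dot-product scores are unchanged before the model
    # is fine-tuned on the new facts.
    return {e: vec + [0.0] * (new_dim - len(vec)) for e, vec in embeddings.items()}

# Toy usage: a small snapshot update triggers a modest dimension increase.
old = {"Paris": [0.1, 0.2], "France": [0.3, 0.4]}
dim = choose_embedding_dim(num_new_facts=5000, base_dim=2, max_dim=8)
expanded = expand_embeddings(old, dim)
```

The cap (`max_dim`) reflects that embedding growth cannot be unbounded; any real implementation would tune these constants per benchmark.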
Problem

Research questions and friction points this paper is trying to address.

Addresses dynamic evolution of knowledge graphs with varying scales.
Balances preserving learned knowledge and incorporating new facts.
Improves performance by adapting embedding dimensions to update scales.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-aware embedding dimension adjustment
Dynamic Distillation for knowledge balance
Gradual evolution framework for CKGE
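The Dynamic Distillation idea above balances old-knowledge preservation against new-fact learning. The paper's exact formulation is not given here, so this is a hedged sketch of one common way such a balance is set up: a distillation weight that shrinks as the update grows relative to the existing graph, applied to a penalty keeping the updated model's scores close to the frozen old model's. Both function names and the weighting rule are assumptions for illustration.

```python
def dynamic_distill_weight(num_new_facts, num_old_facts, alpha=1.0):
    # Hypothetical rule: the larger the update relative to the existing
    # graph, the weaker the anchor to the old model's predictions.
    ratio = num_new_facts / (num_new_facts + num_old_facts)
    return alpha * (1.0 - ratio)

def total_loss(new_fact_loss, new_scores, old_scores, weight):
    # Mean squared distance between the updated model's scores and the
    # frozen old model's scores serves as the distillation penalty.
    distill = sum((n - o) ** 2 for n, o in zip(new_scores, old_scores)) / len(old_scores)
    return new_fact_loss + weight * distill
```

With no new facts the weight stays at `alpha` (full preservation); with a huge update it decays toward zero, letting new knowledge dominate.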
Yifei Li
Xi’an Jiaotong University, School of Computer Science and Technology, Xi’an, Shaanxi, China
Lingling Zhang
Assistant Professor, Xi'an Jiaotong University
Computer vision · Few-shot learning · Zero-shot learning
Hang Yan
Xi’an Jiaotong University, School of Computer Science and Technology, Xi’an, Shaanxi, China
Tianzhe Zhao
Xi'an Jiaotong University | The University of Manchester
Knowledge representation learning
Zihan Ma
Xi'an Jiaotong University
NLP · Social Network · Multi-Modal Learning
Muye Huang
Xi’an Jiaotong University, School of Computer Science and Technology, Xi’an, Shaanxi, China
Jun Liu
Xi’an Jiaotong University, School of Computer Science and Technology, Xi’an, Shaanxi, China