🤖 AI Summary
To address the challenges of varying update scales in dynamically evolving knowledge graphs and the lack of scale adaptability and systematic evaluation in existing methods, this paper proposes SAGE, a Scale-Aware Progressive Evolution framework. Methodologically, SAGE introduces, for the first time, an update-scale-aware mechanism that dynamically adjusts embedding dimensionality according to the volume of newly injected facts. It further designs a scale-driven embedding space expansion strategy and a dynamic knowledge distillation technique, jointly ensuring historical knowledge stability while enabling efficient integration of new knowledge. Extensive experiments across seven benchmark datasets demonstrate that SAGE consistently outperforms fixed-dimension baselines across all temporal snapshots: average MRR, Hits@1, and Hits@10 improve by 1.38%, 1.25%, and 1.60%, respectively. These results validate the effectiveness and generalizability of scale-adaptive modeling for dynamic knowledge graph embedding.
📝 Abstract
Traditional knowledge graph (KG) embedding methods aim to represent entities and relations in a low-dimensional space, primarily focusing on static graphs. However, real-world KGs evolve dynamically through the constant addition of entities, relations, and facts. To address this dynamic nature of KGs, several continual knowledge graph embedding (CKGE) methods have been developed to efficiently update KG embeddings to accommodate new facts while preserving learned knowledge. As KGs grow at different rates and scales in real-world scenarios, existing CKGE methods often fail to consider the varying scales of updates and lack systematic evaluation throughout the entire update process. In this paper, we propose SAGE, a scale-aware gradual evolution framework for CKGE. Specifically, SAGE first determines the embedding dimensions based on the update scales and expands the embedding space accordingly. A Dynamic Distillation mechanism is further employed to balance the preservation of learned knowledge with the incorporation of new facts. We conduct extensive experiments on seven benchmarks, and the results show that SAGE consistently outperforms existing baselines, with notable improvements of 1.38% in MRR, 1.25% in H@1, and 1.60% in H@10. Furthermore, experiments comparing SAGE with methods using fixed embedding dimensions show that SAGE achieves optimal performance on every snapshot, demonstrating the importance of adaptive embedding dimensions in CKGE. The code for SAGE is publicly available at: https://github.com/lyfxjtu/Dynamic-Embedding.
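The core idea (scale-aware dimension growth, embedding-space expansion, and distillation against the previous snapshot) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, the growth heuristic, and the padding scheme are assumptions.

```python
import numpy as np

def choose_dim(base_dim, n_old_facts, n_new_facts, step=16, max_dim=512):
    """Hypothetical scale-aware rule: grow the embedding dimension in
    proportion to the relative size of the new-fact injection."""
    growth = n_new_facts / max(n_old_facts, 1)
    extra = int(round(growth * base_dim / step)) * step  # quantize to `step`
    return min(base_dim + extra, max_dim)

def expand_embeddings(emb, new_dim, rng):
    """Expand the embedding space: keep the learned coordinates intact
    (historical knowledge) and pad new dimensions with small random values."""
    n, old_dim = emb.shape
    if new_dim <= old_dim:
        return emb
    pad = rng.normal(scale=0.01, size=(n, new_dim - old_dim))
    return np.concatenate([emb, pad], axis=1)

def distill_loss(student, teacher, alpha):
    """Distillation term: penalize drift of the shared (old) coordinates
    from the previous snapshot; alpha would be set dynamically, e.g.
    larger for small updates where preservation matters more."""
    d = teacher.shape[1]
    return alpha * float(np.mean((student[:, :d] - teacher) ** 2))

rng = np.random.default_rng(0)
old = rng.normal(size=(100, 64))  # embeddings from the previous snapshot
new_dim = choose_dim(64, n_old_facts=10_000, n_new_facts=2_500)
expanded = expand_embeddings(old, new_dim, rng)
print(new_dim, expanded.shape)  # dimension grows with the update scale
```

Right after expansion the distillation loss on the shared coordinates is zero, since the old coordinates are copied unchanged; it only becomes positive as training on new facts moves them, which is exactly the drift the distillation term is meant to control.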