🤖 AI Summary
To bridge the semantic gap between knowledge graph (KG) structural representations and large language model (LLM) natural language representations, this paper proposes a two-stage self-supervised quantization framework. First, it employs a vector-quantized variational autoencoder (VQ-VAE)-inspired approach to compress KG entities into highly discriminative discrete tokens. Second, these quantized tokens are directly injected as instruction features into LLMs (LLaMA2/3.1), eliminating the need for lengthy prompts or adapter-based fine-tuning. The work introduces, for the first time, KG-specific instruction data construction and joint structural–semantic modeling. Evaluated on link prediction and triple classification, it significantly outperforms unsupervised baselines. Remarkably, optimal performance is achieved with only 16 tokens per entity, substantially reducing computational overhead and inference latency compared to thousand-token prompting strategies.
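The core quantization step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the array sizes, codebook size, and use of plain nearest-neighbor lookup are assumptions for illustration; VQ-VAE-style quantizers share this nearest-codebook lookup, but the paper's SSQR training objective is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): each entity is represented
# by 16 sub-vectors, matching the reported 16 discrete codes per entity.
num_entities, emb_dim = 100, 32
codebook_size, codes_per_entity = 64, 16

entity_emb = rng.normal(size=(num_entities, codes_per_entity, emb_dim))
codebook = rng.normal(size=(codebook_size, emb_dim))

def quantize(vectors, codebook):
    """Map each sub-vector to the index of its nearest codebook entry --
    the discrete lookup step shared by VQ-VAE-style quantizers."""
    # (..., 1, d) - (k, d) broadcasts to squared distances of shape (..., k)
    d2 = ((vectors[..., None, :] - codebook) ** 2).sum(-1)
    return d2.argmin(-1)

# Each entity is now a short sequence of 16 discrete code indices.
codes = quantize(entity_emb, codebook)
print(codes.shape)  # (100, 16)
```

In a trained quantizer the codebook would be learned jointly with the encoder (e.g., via a commitment loss and straight-through gradients), so that the 16 indices preserve the entity's structural and semantic information.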
📝 Abstract
Due to the natural gap between Knowledge Graph (KG) structures and natural language, effectively integrating the holistic structural information of KGs with Large Language Models (LLMs) has emerged as a significant challenge. To this end, we propose a two-stage framework that learns and applies quantized codes for each entity, aiming for the seamless integration of KGs with LLMs. First, a self-supervised quantized representation (SSQR) method is proposed to compress both KG structural and semantic knowledge into discrete codes (i.e., tokens) that align with the format of language sentences. We further design KG instruction-following data by treating these learned codes as features to input directly into LLMs, thereby achieving seamless integration. The experimental results demonstrate that SSQR outperforms existing unsupervised quantized methods, producing more distinguishable codes. Furthermore, the fine-tuned LLaMA2 and LLaMA3.1 models also achieve superior performance on KG link prediction and triple classification tasks, utilizing only 16 tokens per entity instead of the thousands used in conventional prompting methods.
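To make the instruction-following setup concrete, here is a minimal sketch of how learned entity codes could be rendered as tokens inside a link-prediction instruction. The token format (`<code_N>`), relation name, and prompt template are hypothetical; the paper's exact instruction-data construction is not shown here.

```python
def entity_to_code_tokens(codes):
    """Render an entity's quantized code indices as special tokens
    (hypothetical '<code_N>' format, one token per discrete code)."""
    return " ".join(f"<code_{c}>" for c in codes)

# Three codes shown for brevity; the method uses 16 per entity.
head_codes = [3, 41, 7]
prompt = (
    "Given the head entity " + entity_to_code_tokens(head_codes)
    + " and the relation 'born_in', predict the tail entity."
)
print(prompt)
```

Because each entity occupies only 16 such tokens, the LLM's context stays short compared with prompting strategies that serialize neighborhood subgraphs into thousands of tokens.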