🤖 AI Summary
To address the performance limitations of conventional knowledge graph completion (KGC) methods caused by graph sparsity and the prohibitive computational cost of fine-tuning large language models (LLMs), this paper proposes an efficient, fine-tuning-free KGC framework that operates with frozen LLMs. Our method introduces three key innovations: (1) a novel intermediate-layer hidden-state probing mechanism, guided by prompts to precisely extract semantically rich, layer-specific representations; (2) subgraph-aware entity description generation to enhance local structural semantics; and (3) a lightweight classifier for end-to-end inference. Evaluated on multiple standard benchmarks, our approach matches the accuracy of fine-tuned LLM baselines while reducing GPU memory consumption by 188× and accelerating training and inference by 13.48×. This work significantly alleviates the long-standing accuracy–efficiency trade-off in frozen-LLM-based KGC.
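Innovation (2) above, subgraph-aware entity description generation, can be sketched in a few lines. The snippet below is a minimal illustration under assumptions not stated in the summary: the KG is a list of `(head, relation, tail)` triples, and the sampling strategy is a simple seeded shuffle of the entity's one-hop neighborhood. The actual paper's sampling and verbalization may differ.

```python
import random

# Toy knowledge graph as (head, relation, tail) triples (hypothetical data).
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "Île-de-France"),
    ("France", "member_of", "European Union"),
]

def describe(entity, triples, k=2, seed=0):
    """Verbalize up to k sampled one-hop triples into a textual description.

    This stands in for the paper's subgraph-aware description generation;
    the real method's subgraph sampling may be more sophisticated.
    """
    # Collect the entity's local subgraph: triples it appears in.
    neighbors = [t for t in triples if entity in (t[0], t[2])]
    # Sample (here: deterministic shuffle + truncation) to bound prompt length.
    random.Random(seed).shuffle(neighbors)
    return "; ".join(f"{h} {r.replace('_', ' ')} {t}" for h, r, t in neighbors[:k])

desc = describe("Paris", TRIPLES)
```

Such a description would then be inserted into the prompt for the frozen LLM, reducing ambiguity about which real-world entity a KG node denotes.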
📝 Abstract
Traditional knowledge graph completion (KGC) methods rely solely on structural information and struggle with the inherent sparsity of knowledge graphs (KGs). Large language models (LLMs), which acquire extensive knowledge from large corpora and offer powerful context modeling, are promising for mitigating these limitations. Directly fine-tuning LLMs offers great capability but comes at the cost of huge time and memory consumption, while utilizing frozen LLMs yields suboptimal results. In this work, we aim to leverage LLMs for KGC effectively and efficiently. We capture the context-aware hidden states of knowledge triples by employing prompts to stimulate the intermediate layers of LLMs. We then train a data-efficient classifier on these hidden states to harness the inherent capabilities of frozen LLMs in KGC. Additionally, to reduce ambiguity and enrich knowledge representation, we generate detailed entity descriptions through subgraph sampling on KGs. Extensive experiments on standard benchmarks demonstrate the efficiency and effectiveness of our approach. We outperform traditional KGC methods across most datasets and, notably, achieve classification performance comparable to fine-tuned LLMs while enhancing GPU memory efficiency by $188\times$ and accelerating training and inference by $13.48\times$.
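The core pipeline described in the abstract (probe intermediate-layer hidden states of a frozen LLM, then train a lightweight classifier on them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real `probe_hidden_state` would run a prompt through a frozen LLM and read an intermediate layer's hidden state for the triple, whereas here those states are simulated as class-separated random vectors, and the "lightweight classifier" is assumed to be a simple logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # hidden-state dimensionality (hypothetical)

def probe_hidden_state(label: int) -> np.ndarray:
    """Stand-in for probing a frozen LLM's intermediate layer with a prompt.

    True vs. corrupted triples get slightly shifted means, mimicking the
    separability that the LLM's layer-specific representations provide.
    """
    return rng.normal(loc=0.5 * label, scale=1.0, size=DIM)

# Build a small training set of (hidden_state, triple-plausibility) pairs.
labels = rng.integers(0, 2, size=400)
X = np.stack([probe_hidden_state(int(y)) for y in labels])

# Lightweight classifier: logistic regression trained by gradient descent.
# Only these DIM + 1 parameters are trained; the LLM itself stays frozen.
w, b = np.zeros(DIM), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - labels                        # dL/dlogit for cross-entropy
    w -= 0.1 * (X.T @ grad) / len(labels)
    b -= 0.1 * grad.mean()

acc = (((X @ w + b) > 0) == labels).mean()
```

Because only the classifier's parameters receive gradients, no backward pass through the LLM is ever needed, which is where the reported memory and speed savings over fine-tuning come from.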