Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation

📅 2025-05-05
🏛️ International Workshop on the Semantic Web
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address hallucination, outdated knowledge, and poor domain adaptability of large language models (LLMs) in entity disambiguation, this paper proposes a zero-shot knowledge graph (KG)-enhanced approach. It dynamically prunes the candidate entity space using the semantic hierarchy of entity types in the KG and injects structured factual knowledge (entity descriptions) into prompts to guide LLM reasoning. The method requires no fine-tuning or retraining; by combining KG hierarchical semantics with prompt engineering, it yields a transferable, highly adaptive zero-shot framework. Experiments on mainstream ED benchmarks show substantial improvements over baseline LLMs and over description-only augmentation, with performance matching or exceeding supervised, task-specific models. An error analysis confirms that the richness and granularity of the KG's semantic hierarchy are critical drivers of the accuracy gains.

📝 Abstract
Recent advances in Large Language Models (LLMs) have positioned them as a prominent solution for Natural Language Processing tasks. Notably, they can approach these problems in a zero or few-shot manner, thereby eliminating the need for training or fine-tuning task-specific models. However, LLMs face some challenges, including hallucination and the presence of outdated knowledge or missing information from specific domains in the training data. These problems cannot be easily solved by retraining the models with new data as it is a time-consuming and expensive process. To mitigate these issues, Knowledge Graphs (KGs) have been proposed as a structured external source of information to enrich LLMs. With this idea, in this work we use KGs to enhance LLMs for zero-shot Entity Disambiguation (ED). For that purpose, we leverage the hierarchical representation of the entities' classes in a KG to gradually prune the candidate space as well as the entities' descriptions to enrich the input prompt with additional factual knowledge. Our evaluation on popular ED datasets shows that the proposed method outperforms non-enhanced and description-only enhanced LLMs, and has a higher degree of adaptability than task-specific models. Furthermore, we conduct an error analysis and discuss the impact of the leveraged KG's semantic expressivity on the ED performance.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs with KGs for entity disambiguation
Addressing LLM challenges like hallucination and outdated knowledge
Using KG hierarchies to prune candidate space effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Knowledge Graphs to enrich LLMs
Hierarchical class pruning for candidate space
Enhancing prompts with entities' descriptions
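The two ideas listed above can be sketched as a minimal zero-shot pipeline. This is an illustrative assumption of how the approach could look, not the authors' implementation: the type hierarchy, candidate entities, and helper functions are all hypothetical, and the pruning is shown as a single step where the paper prunes gradually down the hierarchy.

```python
# Toy KG type hierarchy: child type -> parent type (illustrative data).
TYPE_PARENT = {
    "City": "Place",
    "Country": "Place",
    "Person": "Agent",
    "Organization": "Agent",
}

def ancestors(t):
    """Return the set containing type t and all of its ancestor types."""
    out = {t}
    while t in TYPE_PARENT:
        t = TYPE_PARENT[t]
        out.add(t)
    return out

def prune_candidates(candidates, predicted_type):
    """Keep only candidates whose KG type is predicted_type or one of its subtypes."""
    return [c for c in candidates if predicted_type in ancestors(c["type"])]

def build_prompt(mention, context, candidates):
    """Enrich the prompt with factual KG knowledge: each candidate's description."""
    lines = [f"Disambiguate the mention '{mention}' in: {context}", "Candidates:"]
    for i, c in enumerate(candidates, 1):
        lines.append(f"{i}. {c['label']}: {c['description']}")
    lines.append("Answer with the number of the correct entity.")
    return "\n".join(lines)

# Hypothetical candidate set for the ambiguous mention "Paris".
candidates = [
    {"label": "Paris (city)", "type": "City",
     "description": "Capital city of France."},
    {"label": "Paris (mythology)", "type": "Person",
     "description": "Trojan prince in Greek mythology."},
]

# Prune with the coarse type inferred from context, then build the LLM prompt.
pruned = prune_candidates(candidates, "Place")
prompt = build_prompt("Paris", "Paris hosted the 1900 Olympics.", pruned)
```

In this sketch, pruning removes the mythological figure before the LLM ever sees it, so the prompt contains only type-consistent candidates plus their descriptions, which is the combination the paper credits for the accuracy gains.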