Prompting Large Language Models with Partial Knowledge for Answering Questions with Unseen Entities

📅 2025-08-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer significant performance degradation on knowledge graph question answering (KGQA) involving unseen entities, primarily due to incomplete knowledge graphs causing entity linking failures and retrieval of only partially relevant evidence—such as explicit facts, implicit cues, or weakly related triples. Method: We propose a novel “Partially Relevant Knowledge Activation” paradigm within a retrieval-augmented generation (RAG) framework. It constructs context from KG triple variants to encode partial relevance and employs prompt learning to activate LLMs’ latent reasoning capabilities, supported by theoretical analysis and empirical validation. Contribution/Results: We formally define the “unseen-entity KGQA” task for the first time, relaxing RAG’s traditional reliance on complete, accurate knowledge. Evaluated on two KGQA benchmarks, our method effectively suppresses noise interference and achieves substantial accuracy gains over embedding-similarity-based baselines.

📝 Abstract
Retrieval-Augmented Generation (RAG) shows impressive performance by supplementing and substituting parametric knowledge in Large Language Models (LLMs). Retrieved knowledge can be divided into three types: explicit answer evidence, implicit answer clues, and insufficient answer context, which can be further categorized into totally irrelevant and partially relevant information. Effectively utilizing partially relevant knowledge remains a key challenge for RAG systems, especially when retrieval is performed over an incomplete knowledge base. Contrary to the conventional view, we propose a new perspective: LLMs can be awakened via partially relevant knowledge that is already embedded in them. To investigate this phenomenon comprehensively, we construct partially relevant knowledge from the triples on the gold reasoning path and their variants by removing the path segment that contains the answer. We provide a theoretical analysis of the awakening effect in LLMs and support our hypothesis with experiments on two Knowledge Graph (KG) Question Answering (QA) datasets. Furthermore, we present a new task, Unseen Entity KGQA, which simulates the real-world challenge of entity linking failing due to KG incompleteness. Our awakening-based approach demonstrates greater efficacy in practical applications, outperforming traditional methods that rely on embedding-based similarity and are prone to returning noisy information.
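The construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the triple format, and the example path are all hypothetical, assuming a gold reasoning path represented as (head, relation, tail) triples from which the answer-bearing triple is removed to leave only partially relevant context.

```python
# Hypothetical sketch: build "partially relevant knowledge" by dropping the
# triple(s) on the gold reasoning path that contain the answer entity, then
# linearize the remainder as prompt context. All names are illustrative.

def build_partial_context(gold_path, answer_entity):
    """Keep only the path triples that do not mention the answer entity."""
    return [(h, r, t) for (h, r, t) in gold_path if answer_entity not in (h, t)]

def format_context(triples):
    """Linearize KG triples into a context string for the LLM prompt."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

# Illustrative two-hop gold path whose final triple holds the answer.
gold_path = [
    ("Albert Einstein", "born_in", "Ulm"),
    ("Ulm", "located_in", "Germany"),
]
partial = build_partial_context(gold_path, "Germany")
print(format_context(partial))  # only the non-answer triple remains
```

Under this framing, the retained triple is partially relevant: it does not state the answer, but it can activate the model's own knowledge of the answer-bearing hop.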
Problem

Research questions and friction points this paper is trying to address.

Utilizing partially relevant knowledge in RAG systems for LLMs
Addressing incomplete knowledge base retrieval challenges in QA tasks
Improving performance on Unseen Entity KGQA with awakening-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

A new perspective: partially relevant knowledge can awaken latent knowledge already embedded in LLMs
Construction of partially relevant context from gold reasoning-path triples and their variants
Outperforming traditional embedding-similarity methods that are prone to returning noisy information
Zhichao Yan
School of Computer and Information Technology, Shanxi University, Taiyuan, China
Jiapu Wang
Beijing University of Technology, Beijing, China
Jiaoyan Chen
Department of Computer Science, University of Manchester
Knowledge Graph, Ontology, Machine Learning, Large Language Model
Yanyan Wang
School of Computer and Information Technology, Shanxi University, Taiyuan, China
Hongye Tan
School of Computer and Information Technology, Shanxi University, Taiyuan, China
Jiye Liang
Shanxi University
Xiaoli Li
Singapore University of Technology and Design, Singapore
Ru Li
Harbin Institute of Technology
Jeff Z. Pan
Professor of Knowledge Computing, University of Edinburgh
Artificial Intelligence, Knowledge Representation and Reasoning, Knowledge Based Learning