Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge graph reasoning is constrained by fixed, query-agnostic path exploration strategies, limiting adaptability to diverse semantic contexts. To address this, we propose a personalized reasoning path construction framework featuring two novel hybrid expert mechanisms: a “length expert” that adaptively determines optimal inference depth based on query complexity, and a “pruning expert” that dynamically selects semantically relevant paths via contextual relevance scoring. Our method integrates graph neural networks with a mixture-of-experts architecture to jointly optimize both path length and quality. Evaluated on multiple benchmark datasets, the model achieves significant improvements over state-of-the-art approaches in both transductive and inductive settings. It demonstrates strong generalization capability and fine-grained context sensitivity, enabling robust reasoning across heterogeneous queries without sacrificing efficiency or expressiveness.

📝 Abstract
Knowledge Graph (KG) reasoning, which aims to infer new facts from structured knowledge repositories, plays a vital role in Natural Language Processing (NLP) systems. Its effectiveness critically depends on constructing informative and contextually relevant reasoning paths. However, existing graph neural networks (GNNs) often adopt rigid, query-agnostic path-exploration strategies, limiting their ability to adapt to diverse linguistic contexts and semantic nuances. To address these limitations, we propose MoKGR, a mixture-of-experts framework that personalizes path exploration through two complementary components: (1) a mixture of length experts that adaptively selects and weights candidate path lengths according to query complexity, providing query-specific reasoning depth; and (2) a mixture of pruning experts that evaluates candidate paths from a complementary perspective, retaining the most informative paths for each query. Through comprehensive experiments on diverse benchmarks, MoKGR demonstrates superior performance in both transductive and inductive settings, validating the effectiveness of personalized path exploration in KG reasoning.
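The two expert mechanisms described above can be illustrated with a minimal sketch. Everything below is hypothetical: the embedding dimension, the linear gate matrices (`W_len`, `W_gate`, `W_exp`), and the random query/path embeddings are illustrative stand-ins, since the abstract does not specify MoKGR's actual encoders or scoring functions. The sketch only shows the general shape of the idea: a softmax gate weights candidate path lengths per query, and a weighted mixture of pruning-expert scores keeps the top-k paths.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8                        # hypothetical embedding dimension
query = rng.normal(size=d)   # stand-in for a learned query embedding

# --- Mixture of length experts ---------------------------------------
# One gate score per candidate path length; the softmax yields
# query-specific weights over reasoning depths.
candidate_lengths = [2, 3, 4, 5]
W_len = rng.normal(size=(len(candidate_lengths), d))  # hypothetical gate
length_weights = softmax(W_len @ query)               # sums to 1

# --- Mixture of pruning experts --------------------------------------
# Each expert scores every candidate path; a query-conditioned gate
# combines the experts, and only the top-k paths are retained.
n_paths, n_experts, k = 6, 3, 2
path_emb = rng.normal(size=(n_paths, d))   # stand-in path embeddings
W_gate = rng.normal(size=(n_experts, d))   # hypothetical expert gate
gate = softmax(W_gate @ query)             # per-expert mixture weights
W_exp = rng.normal(size=(n_experts, d))    # hypothetical expert scorers
expert_scores = path_emb @ W_exp.T         # (n_paths, n_experts)
combined = expert_scores @ gate            # mixed score per path
kept = np.argsort(combined)[::-1][:k]      # indices of retained paths
```

In a trained model the gate and expert matrices would be learned jointly with the GNN, so that "hard" queries receive longer depths and each query keeps only its most informative paths; here random weights merely demonstrate the data flow.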
Problem

Research questions and friction points this paper is trying to address.

Adapting path exploration to diverse linguistic contexts in KG reasoning
Overcoming rigid query-agnostic strategies in graph neural networks
Personalizing reasoning depth and path selection for query complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of length experts for adaptive path selection
Mixture of pruning experts for informative path retention
Personalized path exploration for KG reasoning