Rule Learning for Knowledge Graph Reasoning under Agnostic Distribution Shift

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the violation of the independent and identically distributed (i.i.d.) assumption in knowledge graph (KG) reasoning, caused by unknown selection bias during training and agnostic distribution shifts during testing, and formally defines the out-of-distribution (OOD) KG reasoning task for the first time. To enhance generalization and robustness under non-i.i.d. conditions, we propose StableRule, a stable rule learning framework that removes spurious correlations via feature decorrelation, thereby insulating logical rule learning from distribution shifts. It further adopts an end-to-end differentiable architecture that jointly optimizes rule discovery and invariant representation learning. Extensive experiments on seven heterogeneous KG benchmarks demonstrate that StableRule significantly improves OOD generalization and inference stability. Our work establishes a new paradigm for KG reasoning in open-world, distributionally dynamic environments.

📝 Abstract
Knowledge graph (KG) reasoning remains a critical research area focused on inferring missing knowledge by analyzing relationships among observed facts. Despite this success, a key limitation of existing KG reasoning methods is their dependence on the i.i.d. assumption. This assumption can easily be violated by unknown sample selection bias during training or agnostic distribution shifts during testing, significantly compromising model performance and reliability. To facilitate the deployment of KG reasoning in the wild, this study investigates learning logical rules from KGs affected by unknown selection bias. Additionally, we address test sets with agnostic distribution shifts, formally defining this challenge as out-of-distribution (OOD) KG reasoning, a previously underexplored problem. To solve this issue, we propose the Stable Rule Learning (StableRule) framework, an end-to-end methodology that integrates feature decorrelation with a rule learning network to enhance OOD generalization performance. By leveraging feature decorrelation, StableRule mitigates the adverse effects of covariate shift arising in OOD scenarios, thereby improving the robustness of the rule learning component in deriving logical rules. Extensive experiments on seven benchmark KGs demonstrate the framework's superior effectiveness and stability across diverse heterogeneous environments, underscoring its practical significance for real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Addressing KG reasoning under unknown selection bias
Solving OOD KG reasoning with agnostic distribution shifts
Enhancing robustness in rule learning for KG reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature decorrelation for stable rule learning
End-to-end OOD KG reasoning framework
Robust logical rule derivation under shift
👥 Authors
Shixuan Liu, National University of Defense Technology (Knowledge Reasoning, Domain Generalization, Causal Inference, Data Engineering)
Yue He, Tsinghua University (Causal Inference)
Yunfei Wang, Lawrence Berkeley National Lab; University of Southern Mississippi
Hao Zou, Department of Computer Science and Technology, Tsinghua University, Beijing, China
Haoxiang Cheng, Laboratory for Big Data and Decision, National University of Defense Technology, Hunan, China
Wenjing Yang, Department of Intelligent Data Science, College of Computer Science and Technology, National University of Defense Technology, Hunan, China
Peng Cui, Department of Computer Science and Technology, Tsinghua University, Beijing, China
Zhong Liu, Laboratory for Big Data and Decision, National University of Defense Technology, Hunan, China