HGNet: Scalable Foundation Model for Automated Knowledge Graph Generation from Scientific Literature

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to scientific knowledge graph construction struggle to recognize long multi-word entities, exhibit limited cross-domain generalization, and often neglect the hierarchical structure of knowledge, resulting in shallow and inconsistent graphs. To address these limitations, this work proposes a two-stage zero-shot framework. In the first stage, Z-NERD identifies multi-word entities through Orthogonal Semantic Decomposition and Multi-Scale TCQK attention. The second stage introduces HGNet, which incorporates hierarchy-aware message passing and a Differentiable Hierarchy Loss and, uniquely, models hierarchical abstraction as a continuous attribute in Euclidean space via a Continuum Abstraction Field (CAF) loss, thereby avoiding complex hyperbolic embeddings. The framework achieves state-of-the-art performance on SciERC, SciER, and the newly introduced SPHERE benchmark, with zero-shot gains of 10.76% (NER) and 26.2% (RE), and out-of-distribution improvements of 8.08% (NER) and 5.99% (RE).
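The hierarchy-aware message passing described above can be illustrated with a minimal sketch: each node aggregates messages from its parent, child, and peer neighbours through role-specific weight matrices, so the update distinguishes "is-a" direction from lateral links. The function name, mean-pooling aggregator, residual connection, and tanh nonlinearity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hierarchy_step(H, parents, children, peers, Wp, Wc, Ws):
    """One hierarchy-aware message-passing update (sketch).

    H: (num_nodes, d) entity embeddings.
    parents/children/peers: dicts mapping a node index to the list of
    its neighbours in that role; each role gets its own transform.
    """
    def aggregate(neigh, W):
        out = np.zeros_like(H)
        for i, ns in neigh.items():
            if ns:  # mean-pool messages from this role's neighbours
                out[i] = np.mean(H[ns], axis=0) @ W
        return out

    # Sum role-specific messages with a residual connection, then squash.
    return np.tanh(H + aggregate(parents, Wp)
                     + aggregate(children, Wc)
                     + aggregate(peers, Ws))

# Toy example: node 1 has parent 0, node 0 has child 1, node 2 peers with 1.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
Wp = Wc = Ws = np.eye(4)
H1 = hierarchy_step(H, {1: [0]}, {0: [1]}, {2: [1]}, Wp, Wc, Ws)
```

Keeping separate transforms per relation role is what lets the network propagate abstraction direction, rather than treating the graph as undirected.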

📝 Abstract
Automated knowledge graph (KG) construction is essential for navigating the rapidly expanding body of scientific literature. However, existing approaches struggle to recognize long multi-word entities, often fail to generalize across domains, and typically overlook the hierarchical nature of scientific knowledge. While general-purpose large language models (LLMs) offer adaptability, they are computationally expensive and yield inconsistent accuracy on specialized tasks. As a result, current KGs are shallow and inconsistent, limiting their utility for exploration and synthesis. We propose a two-stage framework for scalable, zero-shot scientific KG construction. The first stage, Z-NERD, introduces (i) Orthogonal Semantic Decomposition (OSD), which promotes domain-agnostic entity recognition by isolating semantic "turns" in text, and (ii) a Multi-Scale TCQK attention mechanism that captures coherent multi-word entities through n-gram-aware attention heads. The second stage, HGNet, performs relation extraction with hierarchy-aware message passing, explicitly modeling parent, child, and peer relations. To enforce global consistency, we introduce two complementary objectives: a Differentiable Hierarchy Loss to discourage cycles and shortcut edges, and a Continuum Abstraction Field (CAF) Loss that embeds abstraction levels along a learnable axis in Euclidean space. This is the first approach to formalize hierarchical abstraction as a continuous property within standard Euclidean embeddings, offering a simpler alternative to hyperbolic methods. We release SPHERE (https://github.com/basiralab/SPHERE), a multi-domain benchmark for hierarchical relation extraction. Our framework establishes a new state of the art on SciERC, SciER, and SPHERE, improving NER by 8.08% and RE by 5.99% on out-of-distribution tests. In zero-shot settings, gains reach 10.76% for NER and 26.2% for RE.
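The CAF objective's core idea, embedding each entity's abstraction level as a scalar obtained by projecting its Euclidean embedding onto a learnable axis, can be sketched as a hinge loss over parent-child pairs. The function name, unit-axis parameterization, and margin value below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def caf_loss(E, axis, parent_child, margin=0.5):
    """Hinge-style Continuum Abstraction Field loss (sketch).

    E: (num_entities, d) Euclidean embeddings.
    axis: (d,) learnable direction; projecting onto it yields each
    entity's continuous abstraction level.
    parent_child: list of (parent, child) index pairs; a parent should
    sit at least `margin` higher on the abstraction axis than its child.
    """
    a = axis / np.linalg.norm(axis)   # unit abstraction axis
    levels = E @ a                    # one scalar level per entity
    gaps = [margin - (levels[p] - levels[c]) for p, c in parent_child]
    return float(np.mean(np.maximum(gaps, 0.0)))

# Example: a more abstract parent sits higher on the axis than its child,
# so the ordering constraint is satisfied and the loss is zero.
E = np.array([[2.0, 0.0],    # parent: abstraction level 2.0
              [0.5, 0.0]])   # child:  abstraction level 0.5
print(caf_loss(E, np.array([1.0, 0.0]), [(0, 1)]))  # → 0.0
```

Because the level is just a linear projection, the loss is differentiable end-to-end in standard Euclidean space, which is the claimed simplification over hyperbolic embeddings.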
Problem

Research questions and friction points this paper is trying to address.

knowledge graph
scientific literature
hierarchical relations
entity recognition
relation extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal Semantic Decomposition
Multi-Scale TCQK Attention
Hierarchy-Aware Message Passing
Differentiable Hierarchy Loss
Continuum Abstraction Field