FUSE: Fast Semi-Supervised Node Embedding Learning via Structural and Label-Aware Optimization

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the practical challenge of missing node features in real-world graph data, this paper proposes an end-to-end node embedding method that jointly optimizes structural preservation, label regularization, and semi-supervised propagation. The method introduces three key innovations: (1) a scalable modularity approximation for efficient unsupervised structural modeling; (2) an intra-class variance minimization constraint to enhance class discriminability; and (3) an attention-weighted random-walk-based label propagation mechanism that integrates topological context with class-aware information. This unified iterative framework yields high-quality embeddings while significantly reducing computational overhead. Extensive experiments on multiple standard benchmarks demonstrate that the proposed approach matches or surpasses state-of-the-art classification accuracy, with strong efficiency and scalability.
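The paper's equations are not reproduced on this page. As a rough, hypothetical sketch of the first two objectives named in the summary (a modularity-style structure term plus an intra-class variance penalty), under assumed notation — the function name, the weighting `lam`, and the dense modularity matrix below are illustrative only; the paper itself uses a scalable modularity *approximation*, which this naive dense version does not attempt:

```python
import numpy as np

def fuse_style_loss(Z, A, labels, lam=1.0):
    """Illustrative joint objective (not the paper's exact formulation).

    Z: (n, d) node embeddings; A: (n, n) symmetric adjacency matrix;
    labels: length-n integer array with -1 marking unlabeled nodes;
    lam: assumed trade-off weight between the two terms.
    """
    deg = A.sum(axis=1)
    m = deg.sum() / 2.0  # total edge weight
    # Dense modularity matrix B_ij = A_ij - d_i d_j / (2m).
    B = A - np.outer(deg, deg) / (2.0 * m)
    # Maximizing tr(Z^T B Z) rewards embeddings aligned with community
    # structure, so we minimize its negative.
    structure = -np.trace(Z.T @ B @ Z)
    # Intra-class variance over labeled nodes: pull same-class
    # embeddings toward their class mean.
    intra = 0.0
    for c in np.unique(labels[labels >= 0]):
        Zc = Z[labels == c]
        intra += ((Zc - Zc.mean(axis=0)) ** 2).sum()
    return structure + lam * intra
```

The third (semi-supervised propagation) term is omitted here; it would couple this loss to the label-spreading step described in the abstract.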

📝 Abstract
Graph-based learning is a cornerstone for analyzing structured data, with node classification as a central task. However, in many real-world graphs, nodes lack informative feature vectors, leaving only neighborhood connectivity and class labels as available signals. In such cases, effective classification hinges on learning node embeddings that capture structural roles and topological context. We introduce a fast semi-supervised embedding framework that jointly optimizes three complementary objectives: (i) unsupervised structure preservation via scalable modularity approximation, (ii) supervised regularization to minimize intra-class variance among labeled nodes, and (iii) semi-supervised propagation that refines unlabeled nodes through random-walk-based label spreading with attention-weighted similarity. These components are unified into a single iterative optimization scheme, yielding high-quality node embeddings. On standard benchmarks, our method consistently achieves classification accuracy on par with or superior to state-of-the-art approaches, while requiring significantly less computational cost.
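The abstract's third component, random-walk label spreading with attention-weighted similarity, could look roughly like the following. This is a generic sketch under assumed notation, not the paper's actual mechanism: the attention form (softmax of embedding dot products restricted to edges), the restart weight `alpha`, and the iteration count are all illustrative choices.

```python
import numpy as np

def propagate_labels(A, Z, Y, alpha=0.85, iters=50):
    """Illustrative attention-weighted label propagation.

    A: (n, n) adjacency; Z: (n, d) embeddings; Y: (n, k) one-hot label
    matrix with all-zero rows for unlabeled nodes. Returns a predicted
    class index per node.
    """
    # Attention weights: embedding similarity, kept only on existing edges.
    sim = Z @ Z.T
    att = np.where(A > 0, np.exp(sim), 0.0)
    # Row-normalize into a random-walk transition matrix (guard against
    # isolated nodes with zero out-weight).
    P = att / np.maximum(att.sum(axis=1, keepdims=True), 1e-12)
    F = Y.astype(float).copy()
    for _ in range(iters):
        # Spread labels along the walk, restarting at observed labels.
        F = alpha * (P @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)
```

With `alpha < 1` this iteration converges to the usual label-spreading fixed point `(1 - alpha)(I - alpha * P)^{-1} Y`; the attention weights bias the walk toward neighbors that are close in embedding space.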
Problem

Research questions and friction points this paper is trying to address.

Learning node embeddings from structural connectivity and limited labels
Jointly optimizing structure preservation and class-aware regularization
Achieving high accuracy with reduced computational cost in classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes structure preservation and label regularization
Uses semi-supervised propagation with attention-weighted similarity
Unifies three objectives in single iterative optimization scheme
👥 Authors
Sujan Chakraborty (School of Data Science, IISER Thiruvananthapuram, India)
Rahul Bordoloi (Research Assistant, University of Rostock; interests: machine learning, dimensionality reduction, functional data, time series classification)
Anindya Sengupta (Texas A&M University, USA)
Olaf Wolkenhauer (Professor for Systems Biology and Bioinformatics; interests: Systems Theory, Data Science)
Saptarshi Bej (School of Data Science, IISER Thiruvananthapuram, India)