🤖 AI Summary
This position paper addresses key bottlenecks in deep graph learning and network science, namely limited interpretability, inadequate higher-order structural modeling, the absence of standardized evaluation protocols, and inefficiency in large-scale graph processing. It argues for systematically embedding interpretable network science measures, such as centrality, community structure, and higher-order clustering, into graph neural network architectures, and, conversely, for bringing continuous gradient-based optimization and standardized benchmarks to network science. The core contributions are: (1) a synthesis of hypothesis-driven and data-driven paradigms; (2) a discussion of how graph models can balance interpretability and expressive power; and (3) the identification of cross-disciplinary research opportunities in both fields, offering theoretical foundations and practical pathways toward robust, interpretable, and scalable next-generation graph learning.
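The summary's idea of embedding interpretable network science metrics into a graph neural network can be illustrated by precomputing per-node measures and concatenating them onto the node feature matrix before the first GNN layer. The sketch below is illustrative only, not the paper's method; it assumes `networkx` and `numpy` are available, and the function names are hypothetical.

```python
import networkx as nx
import numpy as np

def topology_features(G):
    """Per-node network-science metrics: degree centrality,
    betweenness centrality, and local clustering coefficient.
    These are the kinds of interpretable measures a GNN could
    consume as auxiliary input features."""
    nodes = sorted(G.nodes())
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G)
    clu = nx.clustering(G)
    return np.array([[deg[v], btw[v], clu[v]] for v in nodes])

def augment_node_features(X, G):
    """Concatenate topological metrics onto an existing
    (n_nodes, d) node feature matrix X."""
    return np.hstack([X, topology_features(G)])

# Example on a standard toy graph: 34 nodes, one-hot placeholder features.
G = nx.karate_club_graph()
X = np.eye(G.number_of_nodes())
X_aug = augment_node_features(X, G)
print(X_aug.shape)  # (34, 37): 34 one-hot dims + 3 topological metrics
```

In practice the augmented matrix `X_aug` would replace `X` as the input to the first message-passing layer; because the appended columns are named, hand-chosen measures, their learned weights remain directly interpretable.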
📝 Abstract
Deep graph learning and network science both analyze graphs but approach similar problems from different perspectives. Whereas network science focuses on models and measures that reveal the organizational principles of complex systems under explicit assumptions, deep graph learning focuses on flexible and generalizable models that learn patterns in graph data in an automated fashion. Despite these differences, both fields share the same goal: to better model and understand patterns in graph-structured data. Early efforts to integrate methods, models, and measures from network science and deep graph learning indicate significant untapped potential. In this position paper, we explore opportunities at their intersection. We discuss open challenges in deep graph learning, including data augmentation, improved evaluation practices, higher-order models, and pooling methods. Likewise, we highlight challenges in network science, including scaling to massive graphs, integrating continuous gradient-based optimization, and developing standardized benchmarks.