🤖 AI Summary
Graph Edit Distance (GED) computation is NP-hard, and the conventional unit-cost assumption for edit operations is unrealistic and task-agnostic. Method: The paper proposes a context-aware GED learning framework that combines graph neural networks (GNNs) with an interpretable Generalized Additive Model (GAM) to learn node- and edge-specific edit costs. The model can be trained with supervision or, via a gradient-only self-organizing mechanism, in a fully unsupervised manner that removes the need for ground-truth GED labels. Contribution/Results: The framework enables flexible, heterogeneous cost modeling for node and edge edits while providing a human-interpretable cost decomposition. Experiments show results comparable to state-of-the-art reference methods on molecular graph matching and structural pattern discovery, while significantly improving adaptability across domains and transparency through principled interpretability.
📝 Abstract
Graph Edit Distance (GED) is defined as the minimum-cost transformation of one graph into another and is a widely adopted metric for measuring the dissimilarity between graphs. The major drawback of GED is that its exact computation is NP-hard, which has in turn led to the development of various approximation methods, including approaches based on neural networks (NN). Most of these NN-based models simplify the GED problem by assuming unit-cost edit operations, a rather unrealistic constraint in real-world applications. In this work, we present a novel Graph Neural Network framework that approximates GED using both supervised and unsupervised training. In the unsupervised setting, it employs a gradient-only self-organizing mechanism that enables optimization without ground-truth distances. Moreover, a core component of our architecture is the integration of a Generalized Additive Model, which allows the flexible and interpretable learning of context-aware edit costs. Experimental results show that the proposed method achieves results comparable to those of state-of-the-art reference methods, yet significantly improves both adaptability and interpretability. In particular, the learned cost function offers insights into complex graph structures, making it particularly valuable in domains such as molecular analysis and structural pattern discovery.
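To make the notion of non-unit, attribute-dependent edit costs concrete, the following minimal sketch uses NetworkX's exact GED solver, which accepts per-operation cost functions. This is not the paper's method (the paper *learns* such cost functions via a GNN and a GAM rather than fixing them by hand); the element labels and cost values below are hypothetical, chosen only to illustrate heterogeneous costs on molecule-like graphs.

```python
import networkx as nx

# Two tiny "molecule-like" graphs with element labels on the nodes.
# They differ only in one node label: C vs. N.
G1 = nx.Graph()
G1.add_nodes_from([(0, {"elem": "C"}), (1, {"elem": "C"}), (2, {"elem": "O"})])
G1.add_edges_from([(0, 1), (1, 2)])

G2 = nx.Graph()
G2.add_nodes_from([(0, {"elem": "C"}), (1, {"elem": "N"}), (2, {"elem": "O"})])
G2.add_edges_from([(0, 1), (1, 2)])

# Hand-written, attribute-aware costs (illustrative values only):
def node_subst_cost(a, b):
    # Relabeling a node is cheaper than deleting and re-inserting it.
    return 0.0 if a["elem"] == b["elem"] else 2.0

def node_del_cost(a):
    return 3.0  # deleting any node is expensive

def node_ins_cost(a):
    return 3.0  # inserting any node is expensive

# With these costs, the optimal edit path relabels node 1 (C -> N)
# at cost 2.0 instead of deleting and re-inserting it at cost 6.0.
ged = nx.graph_edit_distance(
    G1, G2,
    node_subst_cost=node_subst_cost,
    node_del_cost=node_del_cost,
    node_ins_cost=node_ins_cost,
)
print(ged)  # 2.0
```

Replacing these fixed functions with learned, context-aware ones is precisely what makes the cost model in the paper flexible; exact solvers like this one remain limited to very small graphs because of the NP-hardness noted above.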