🤖 AI Summary
This work addresses a fundamental geometric limitation in existing attributed graph representation learning methods, which suffer from information loss due to the forced fusion of node attribute manifolds with graph structures residing in incompatible metric spaces. To overcome this, the authors propose a novel variational autoencoder that explicitly disentangles the attribute manifold from the graph geometry. By quantifying the distortion incurred when mapping the attribute manifold into the metric space required by the graph heat kernel, the model converts this distortion into an interpretable structural descriptor. This approach not only uncovers hidden connectivity patterns and anomalies that conventional models fail to capture but also theoretically exposes the inherent limitations of current methodologies. Empirically, it achieves substantial improvements in both graph generation and anomaly detection tasks.
📝 Abstract
The standard approach to representation learning on attributed graphs -- simultaneously reconstructing node attributes and graph structure -- is geometrically flawed: it merges two potentially incompatible metric spaces, forcing a destructive alignment that erodes information about the graph's underlying generative process. To recover this lost signal, we introduce a custom variational autoencoder that separates manifold learning from structural alignment. By quantifying the metric distortion needed to map the attribute manifold onto the graph's heat kernel, we turn geometric conflict into an interpretable structural descriptor. Experiments show that our method uncovers connectivity patterns and anomalies undetectable by conventional approaches, demonstrating both the theoretical inadequacy and the practical limitations of those methods.
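The core idea -- comparing distances on the attribute manifold against the metric induced by the graph heat kernel, and reading the disagreement as a per-node descriptor -- can be sketched on a toy graph. This is a minimal illustration, not the paper's model: the attribute matrix `X`, the diffusion time `t`, and the single-rescaling distortion score are all hypothetical stand-ins for the learned components described in the abstract.

```python
import numpy as np
from scipy.linalg import expm

# Toy 4-node path graph with 2-D node attributes (hypothetical data).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
X = np.array([
    [0.0, 0.0],
    [1.0, 0.1],
    [2.0, 0.0],
    [9.0, 9.0],  # attribute outlier sitting on the graph's tail
])

# Graph heat kernel K_t = exp(-t L), with combinatorial Laplacian L.
L = np.diag(A.sum(axis=1)) - A
t = 1.0
K = expm(-t * L)

# Heat-kernel (diffusion) distance: d_HK(i, j)^2 = K_ii + K_jj - 2 K_ij.
d_hk = np.sqrt(np.maximum(
    K.diagonal()[:, None] + K.diagonal()[None, :] - 2 * K, 0.0))

# Euclidean pairwise distances in raw attribute space (standing in for
# distances on a learned attribute manifold).
d_attr = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Per-node distortion: residual disagreement between the two metrics after
# one global rescaling -- a crude proxy for the structural descriptor.
scale = d_hk.sum() / d_attr.sum()
distortion = np.abs(scale * d_attr - d_hk).sum(axis=1)
print(distortion)
```

Nodes whose attribute-space neighborhoods disagree most with their diffusion-geometry neighborhoods receive the highest scores, which is the sense in which geometric conflict becomes an interpretable, anomaly-revealing signal rather than noise to be averaged away.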