🤖 AI Summary
Existing graph diffusion models embed features of varying curvature into a shared Euclidean latent space, ignoring the intrinsic non-Euclidean manifold structure of graph data. This leaves geometric potential untapped and causes severe representation entanglement. To address this, we propose GeoMancer, the first unified Riemannian diffusion framework. It (i) replaces the conventional exponential map with an isometry-invariant Riemannian gyrokernel to mitigate numerical instability on manifolds; (ii) introduces a manifold-constrained diffusion process coupled with a self-guided unconditional generation mechanism, ensuring that generated samples strictly reside on the target Riemannian manifold; and (iii) jointly models generation and prediction tasks via manifold-decoupled representations that explicitly encode multi-level geometric properties. Extensive experiments on multiple graph generation and prediction benchmarks demonstrate significant improvements over state-of-the-art methods, validating both the effectiveness and the necessity of Riemannian geometric modeling for complex graph structures.
📝 Abstract
Graph diffusion models have made significant progress in learning structured graph data and have demonstrated strong potential for predictive tasks. Existing approaches typically embed node, edge, and graph-level features into a unified latent space, modeling prediction tasks such as classification and regression as a form of conditional generation. However, due to the non-Euclidean nature of graph data, features of different curvatures become entangled in the same latent space, leaving their geometric potential unrealized. To address this issue, we aim to construct an ideal Riemannian diffusion model that captures the distinct manifold signatures of complex graph data and learns their distribution. This goal faces two challenges: numerical instability caused by exponential mapping during the encoding process, and manifold deviation during diffusion generation. To address these challenges, we propose GeoMancer: a novel Riemannian graph diffusion framework for both generation and prediction tasks. To mitigate numerical instability, we replace exponential mapping with an isometry-invariant Riemannian gyrokernel approach and decouple multi-level features onto their respective task-specific manifolds to learn optimal representations. To address manifold deviation, we introduce a manifold-constrained diffusion method and a self-guided strategy for unconditional generation, ensuring that the generated data remains aligned with the manifold signature. Extensive experiments validate the effectiveness of our approach, demonstrating superior performance across a variety of tasks.
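The numerical instability of the exponential map mentioned above can be illustrated concretely. The sketch below (our own minimal example, not the paper's code) implements the standard exponential map at the origin of the Lorentz (hyperboloid) model of hyperbolic space, exp_o(v) = cosh(‖v‖)·o + sinh(‖v‖)·v/‖v‖; because cosh and sinh grow exponentially, even moderately large tangent vectors overflow float64 (cosh(x) is inf for x ≳ 710), which is the kind of failure a kernel-based encoding avoids:

```python
import numpy as np

def lorentz_expmap0(v):
    """Exponential map at the origin o = (1, 0, ..., 0) of the
    Lorentz (hyperboloid) model: exp_o(v) = (cosh||v||, sinh||v|| * v/||v||).
    Illustrative only; not the paper's implementation."""
    v = np.asarray(v, dtype=np.float64)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.concatenate(([1.0], np.zeros_like(v)))
    return np.concatenate(([np.cosh(norm)], np.sinh(norm) * v / norm))

small = lorentz_expmap0([0.5, 0.5])      # small tangent vector: well-behaved
large = lorentz_expmap0([600.0, 600.0])  # ||v|| ~ 848: cosh/sinh overflow

print(np.isfinite(small).all())  # True; point lies on the hyperboloid
print(np.isfinite(large).all())  # False: overflow to inf
```

The small input satisfies the hyperboloid constraint x₀² − x₁² − x₂² = 1 (since cosh² − sinh² = 1), while the large one silently leaves the manifold through floating-point overflow.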