Riemannian generative decoder

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Riemannian representation learning methods suffer from manifold specificity, optimization challenges, and geometric constraints imposed by encoder-based architectures, hindering simultaneous generality and interpretability. This paper proposes an encoder-free Riemannian generative decoding framework: a decoder network is trained jointly with manifold-valued latent variables, which are optimized directly on the target manifold with a Riemannian optimizer to maximize the data likelihood, yielding a geometrically aligned and interpretable latent space. By eliminating reliance on manifold-specific designs and explicit geometric regularizers, the approach is architecture-agnostic (compatible with standard neural backbones) and supports modeling diverse non-Euclidean structures. Evaluated on three case studies (a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cell-cycle dynamics), the learned representations capture intrinsic non-Euclidean geometry, exhibit improved training stability, and are substantially more interpretable.

📝 Abstract
Riemannian representation learning typically relies on approximating densities on chosen manifolds. This involves optimizing difficult objectives, potentially harming models. To completely circumvent this issue, we introduce the Riemannian generative decoder, which finds manifold-valued maximum-likelihood latents with a Riemannian optimizer while training a decoder network. By discarding the encoder, we vastly simplify the manifold constraint compared to current approaches, which often handle only a few specific manifolds. We validate our approach on three case studies -- a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cells undergoing a cell division cycle -- each showing that learned representations respect the prescribed geometry and capture intrinsic non-Euclidean structure. Our method requires only a decoder, is compatible with existing architectures, and yields interpretable latent spaces aligned with data geometry.
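The core idea (free latents constrained to a manifold, trained jointly with a decoder by gradient descent) can be sketched in a few lines. The following is a minimal illustrative toy, not the authors' code: it assumes a unit-sphere latent manifold, a linear decoder, and a projected-gradient step (tangent projection plus normalization retraction) as a stand-in for a full Riemannian optimizer. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N observations in D dimensions; latents live on the sphere S^{d-1}.
N, D, d = 64, 10, 3
X = rng.normal(size=(N, D))

Z = rng.normal(size=(N, d))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)       # start on the sphere
W = 0.1 * rng.normal(size=(d, D))                   # linear "decoder" weights

loss0 = ((Z @ W - X) ** 2).mean()                   # initial reconstruction loss
lr = 0.05
for _ in range(500):
    R = Z @ W - X                                   # residual of decoded latents
    loss = (R ** 2).mean()
    gZ = 2 * R @ W.T / R.size                       # Euclidean grad w.r.t. latents
    gW = 2 * Z.T @ R / R.size                       # Euclidean grad w.r.t. decoder
    # Riemannian step for Z: project the gradient onto the tangent space of the
    # sphere at each point, take a step, then retract by renormalizing.
    gZ_tan = gZ - np.sum(gZ * Z, axis=1, keepdims=True) * Z
    Z = Z - lr * gZ_tan
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    W = W - lr * gW                                 # plain gradient step for decoder
```

After training, every latent still satisfies the manifold constraint exactly (unit norm), and the loss has decreased; the paper's method replaces this toy's linear map and sphere with arbitrary decoder architectures and target manifolds.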
Problem

Research questions and friction points this paper is trying to address.

Avoids approximating densities on Riemannian manifolds
Simplifies manifold constraints by discarding encoder
Captures intrinsic non-Euclidean structure in data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian generative decoder for manifold-valued latents
Discards encoder to simplify manifold constraints
Compatible with existing architectures and interpretable