🤖 AI Summary
Existing interpolation methods for generative models lack a universal, principled definition, often relying on strong assumptions or requiring architectural modifications. Method: We propose a general interpolation framework that models interpolation paths as Riemannian geodesics constrained by the data distribution. Leveraging gradient information of the probability density, it computes high-density, quasi-geodesic paths directly in the pre-trained model’s latent space, without additional training or fine-tuning. Contribution/Results: Theoretically, the path satisfies the geodesic equation locally under a Riemannian metric induced by the data density. Algorithmically, the method is agnostic to the choice of distance metric and data distribution. Empirically, it significantly improves interpolation smoothness and semantic plausibility across diverse generative models, including VAEs, GANs, and diffusion models, on both image and text datasets, outperforming state-of-the-art baselines.
📝 Abstract
Interpolation in generative models allows for controlled generation, model inspection, and more. Unfortunately, most generative models lack a principled notion of interpolants without restrictive assumptions on either the model or the data dimension. In this paper, we develop a general interpolation scheme that targets likely transition paths and is compatible with different metrics and probability distributions. We consider interpolants analogous to a geodesic constrained to a suitable data distribution and derive a novel algorithm for computing these curves, which requires no additional training. Theoretically, we show that our method can locally be viewed as a geodesic under a suitable Riemannian metric. We quantitatively show that our interpolation scheme traverses higher-density regions than baselines across a range of models and datasets.
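The idea of a density-constrained quasi-geodesic can be sketched numerically: discretize the path between two latent codes, then iteratively nudge the interior points toward high-density regions (using the gradient of the log-density) while a discrete-curvature "spring" term keeps the path short and smooth. The snippet below is an illustrative toy, not the paper's exact algorithm: it uses an analytic Gaussian-mixture density as a stand-in for a generative model's latent density, and the function names (`log_density_grad`, `quasi_geodesic`) and hyperparameters are assumptions for this sketch.

```python
import numpy as np

def log_density_grad(z, means, var=0.25):
    # Gradient of the log of a toy isotropic Gaussian-mixture density
    # (stand-in for a pre-trained model's latent log-density gradient).
    diffs = means - z                                   # (M, d)
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1) / var)   # unnormalized responsibilities
    w = w / (w.sum() + 1e-12)
    return (w[:, None] * diffs).sum(axis=0) / var

def quasi_geodesic(z0, z1, means, K=20, steps=200, lr=0.05, lam=1.0):
    # Initialize with linear interpolation, then relax interior points:
    # the density gradient pulls the path into high-density regions,
    # while the discrete-curvature term keeps it smooth and short.
    ts = np.linspace(0.0, 1.0, K)[:, None]
    path = (1 - ts) * z0 + ts * z1
    for _ in range(steps):
        for i in range(1, K - 1):                       # endpoints stay fixed
            smooth = path[i - 1] + path[i + 1] - 2 * path[i]
            dens = log_density_grad(path[i], means)
            path[i] += lr * (lam * smooth + dens)
    return path
```

On a 2D example with a mode off the straight line, the relaxed path bends through the mode instead of cutting across the low-density gap, which is the qualitative behavior the interpolation scheme targets.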