🤖 AI Summary
This work addresses the failure of manifold learning in non-Euclidean metric spaces, such as the Wasserstein space, by proposing a pointwise convergence theory for graph Laplacians applicable to general metric spaces. Methodologically, it introduces two conditions, geodesic approximability and metric regularity, and combines metric geometry with spectral graph theory to construct a kernel-density-weighted graph model. The authors rigorously establish pointwise convergence of this discrete operator to a weighted Laplacian defined intrinsically on the underlying metric space. The key contribution is the removal of the classical Euclidean assumption, providing a rigorous theoretical foundation for geometry-preserving dimensionality reduction under non-Euclidean distances, including the Wasserstein distance. This significantly extends the applicability of manifold learning and lays principled groundwork for structured representation of non-Euclidean data such as probability distributions and shapes.
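For orientation, the discrete operator in question is a kernel graph Laplacian of the standard form shown below; this is a schematic version only, and the paper's kernel-density weighting and exact normalization may differ:

$$
(L_{n,\varepsilon} f)(x) \;=\; \frac{1}{n\,\varepsilon}\sum_{i=1}^{n} K\!\left(\frac{d(x, x_i)}{\sqrt{\varepsilon}}\right)\bigl(f(x_i) - f(x)\bigr),
$$

where $d$ is the metric on the space (Euclidean classically, Wasserstein here), $K$ is a smooth decaying kernel, and $\varepsilon > 0$ is a bandwidth. Pointwise convergence means that $(L_{n,\varepsilon} f)(x)$ approaches a weighted Laplacian of $f$ at each point as $n \to \infty$ and $\varepsilon \to 0$ at a suitable rate.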
📝 Abstract
Laplacian-based methods are popular for dimensionality reduction of data lying in $\mathbb{R}^N$. Several theoretical results for these algorithms depend on the fact that the Euclidean distance approximates the geodesic distance on the underlying submanifold on which the data are assumed to lie. However, for some applications, other metrics, such as the Wasserstein distance, may provide a more appropriate notion of distance than the Euclidean distance. We provide a framework that generalizes the problem of manifold learning to metric spaces and study when a metric satisfies sufficient conditions for the pointwise convergence of the graph Laplacian.
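As a minimal sketch of the idea (not the authors' exact estimator: the function names and the 1-D Wasserstein example are illustrative assumptions, and the paper's kernel-density weighting is omitted), a kernel graph Laplacian over a general metric space can be built by swapping the metric used for the pairwise distances:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def graph_laplacian(samples, metric, eps):
    """Unnormalized kernel graph Laplacian over a finite sample
    from a metric space; `metric` is any distance function."""
    n = len(samples)
    # Pairwise distances under the chosen (possibly non-Euclidean) metric.
    D = np.array([[metric(a, b) for b in samples] for a in samples])
    # Gaussian kernel weights at bandwidth eps.
    W = np.exp(-D**2 / eps)
    # L = Diag(row sums) - W, so (L f)_i = sum_j W_ij (f_i - f_j).
    return np.diag(W.sum(axis=1)) - W

# Hypothetical example: data points are 1-D empirical distributions,
# compared with the 1-D Wasserstein distance instead of Euclidean.
rng = np.random.default_rng(0)
dists = [rng.normal(loc=mu, size=200) for mu in np.linspace(0.0, 3.0, 30)]
L = graph_laplacian(dists, wasserstein_distance, eps=0.5)
```

The only change from the classical Euclidean construction is the `metric` argument; whether that substitution still yields a convergent Laplacian is exactly the question the paper's conditions (geodesic approximability, metric regularity) are designed to answer.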