🤖 AI Summary
This paper addresses the failure of the Euclidean-space assumption in semi-supervised learning for high-dimensional data. Methodologically, it extends graph Laplacian learning from finite-dimensional Euclidean spaces to the infinite-dimensional Wasserstein space, the first formulation of its kind. Leveraging the manifold assumption, it rigorously characterizes the Laplace–Beltrami operator on submanifolds of the Wasserstein space and establishes a theoretically consistent classification framework via variational convergence analysis of discrete graph p-Dirichlet energies. Its key contribution is the novel integration of Wasserstein geometry, p-Dirichlet energy modeling, and variational convergence theory, overcoming traditional graph-based learning's reliance on linear or finite-dimensional spaces. Experiments on multiple benchmark datasets demonstrate that the method achieves both theoretical consistency and robust classification performance in high-dimensional settings, offering a new paradigm for semi-supervised learning in non-Euclidean, infinite-dimensional spaces.
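To make the pipeline the summary describes concrete, here is a minimal sketch of graph-based Laplace learning in which the graph weights come from pairwise Wasserstein distances between empirical measures. This is not the paper's implementation: the toy data, the 1-D Wasserstein distance from SciPy, the Gaussian kernel, and the bandwidth choice `eps` are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy dataset: each sample is a 1-D empirical distribution (a bag of points).
# Class 0: points drawn from N(0, 1); class 1: points drawn from N(3, 1).
n_per_class, bag_size = 20, 50
bags = [rng.normal(0.0, 1.0, bag_size) for _ in range(n_per_class)] + \
       [rng.normal(3.0, 1.0, bag_size) for _ in range(n_per_class)]
labels = np.array([0] * n_per_class + [1] * n_per_class)
n = len(bags)

# Pairwise 1-D Wasserstein distances between the empirical measures.
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(bags[i], bags[j])

# Gaussian graph weights; the bandwidth eps is a hypothetical choice.
eps = np.median(D)
W = np.exp(-(D / eps) ** 2)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = diag(W @ 1) - W.
L = np.diag(W.sum(axis=1)) - W

# Laplace learning: fix u on the labeled nodes, solve L u = 0 on the rest.
labeled = np.array([0, n - 1])            # one labeled sample per class
unlabeled = np.setdiff1d(np.arange(n), labeled)
g = labels[labeled].astype(float)         # boundary values on labeled nodes

A = L[np.ix_(unlabeled, unlabeled)]
b = -L[np.ix_(unlabeled, labeled)] @ g
u = np.zeros(n)
u[labeled] = g
u[unlabeled] = np.linalg.solve(A, b)      # harmonic extension

pred = (u > 0.5).astype(int)
print("accuracy:", (pred == labels).mean())
```

The solve step is the harmonic-extension form of Laplace learning: the label function is pinned at the labeled nodes and extended to the unlabeled nodes by requiring the graph Laplacian to vanish there.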
📝 Abstract
The manifold hypothesis posits that high-dimensional data typically reside on low-dimensional subspaces. In this paper, we adopt the manifold hypothesis to investigate graph-based semi-supervised learning methods. In particular, we examine Laplace learning in the Wasserstein space, extending the classical notion of graph-based semi-supervised learning algorithms from finite-dimensional Euclidean spaces to an infinite-dimensional setting. To achieve this, we prove variational convergence of a discrete graph p-Dirichlet energy to its continuum counterpart. In addition, we characterize the Laplace–Beltrami operator on a submanifold of the Wasserstein space. Finally, we validate the proposed theoretical framework through numerical experiments on benchmark datasets, demonstrating the consistency of our classification performance in high-dimensional settings.
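For reference, the discrete graph p-Dirichlet energy mentioned in the abstract is usually written as below. This is the standard form from the graph-based learning literature, with kernel weights built from the 2-Wasserstein distance; the paper's exact kernel and scaling constants may differ.

```latex
% Illustrative (standard) form of the discrete graph p-Dirichlet energy on
% samples \mu_1,\dots,\mu_n (probability measures), with kernel profile \eta
% and graph bandwidth \varepsilon, and a typical continuum counterpart.
\[
  E_{n,p}(u) \;=\; \frac{1}{2}\sum_{i,j=1}^{n} w_{ij}\,\bigl|u(\mu_i)-u(\mu_j)\bigr|^{p},
  \qquad
  w_{ij} \;=\; \eta\!\left(\frac{W_2(\mu_i,\mu_j)}{\varepsilon}\right),
\]
\[
  E_p(u) \;=\; \int_{\mathcal{M}} \bigl\|\nabla_{\mathcal{M}} u\bigr\|^{p}\,\mathrm{d}\rho,
\]
```

Here \(\mathcal{M}\) is the data submanifold (in this setting, a submanifold of the Wasserstein space) and \(\rho\) the sampling distribution; variational convergence of \(E_{n,p}\) to \(E_p\) as \(n \to \infty\) and \(\varepsilon \to 0\) is what underwrites the consistency of the discrete classifier.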