Towards Scalable Topological Regularizers

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional feature-discrepancy measures for latent-space distribution matching are computationally expensive and neglect geometric and topological structure, while existing persistent-homology-based methods scale poorly and destabilize training. Method: This paper proposes a scalable topological regularization framework centered on a lightweight regularizer that computes persistent homology over many small subsamples. Contribution/Results: The authors prove that the regularizer's gradient is continuous with respect to smooth input densities, which ensures stable backpropagation, and provide an efficient parallelized GPU implementation, overcoming the bottleneck that limits topological regularization on point clouds exceeding one thousand points. Experiments on shape matching, image generation, and semi-supervised learning demonstrate improved training stability and scalability to large-scale settings.

📝 Abstract
Latent space matching, which consists of matching distributions of features in latent space, is a crucial component for tasks such as adversarial attacks and defenses, domain adaptation, and generative modelling. Metrics for probability measures, such as Wasserstein and maximum mean discrepancy, are commonly used to quantify the differences between such distributions. However, these are often costly to compute, or do not appropriately take the geometric and topological features of the distributions into consideration. Persistent homology is a tool from topological data analysis which quantifies the multi-scale topological structure of point clouds, and has recently been used as a topological regularizer in learning tasks. However, computation costs preclude larger scale computations, and discontinuities in the gradient lead to unstable training behavior such as in adversarial tasks. We propose the use of principal persistence measures, based on computing the persistent homology of a large number of small subsamples, as a topological regularizer. We provide a parallelized GPU implementation of this regularizer, and prove that gradients are continuous for smooth densities. Furthermore, we demonstrate the efficacy of this regularizer on shape matching, image generation, and semi-supervised learning tasks, opening the door towards a scalable regularizer for topological features.
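The abstract's core idea, computing persistent homology over many small subsamples rather than over the full point cloud, can be illustrated with a minimal numpy-only sketch. This is not the authors' implementation: it covers only 0-dimensional homology (whose death times are exactly the minimum-spanning-tree edge lengths), summarizes each subsample's diagram by its total persistence, and compares two clouds by the squared difference of the averaged statistics. The function names (`h0_deaths`, `subsampled_h0_statistic`, `topo_regularizer`) and all parameter defaults are illustrative assumptions.

```python
import numpy as np

def h0_deaths(points):
    """Death times of 0-dim persistent homology of a small point cloud.

    For H0 these are the edge lengths of the minimum spanning tree
    (single-linkage merge heights), computed here with Prim's algorithm.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()          # cheapest known edge into the tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf  # never re-add tree vertices
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return np.sort(np.asarray(deaths))

def subsampled_h0_statistic(points, k=4, n_subsamples=256, seed=0):
    """Average total H0 persistence over many size-k subsamples.

    A scalar summary in the spirit of principal persistence measures:
    each subsample is tiny, so its diagram is cheap, and the cost scales
    with the number of subsamples rather than the cloud size.
    """
    rng = np.random.default_rng(seed)
    totals = []
    for _ in range(n_subsamples):
        idx = rng.choice(len(points), size=k, replace=False)
        totals.append(h0_deaths(points[idx]).sum())
    return float(np.mean(totals))

def topo_regularizer(x, y, k=4, n_subsamples=256, seed=0):
    """Penalize mismatch between the subsampled topology of two clouds."""
    sx = subsampled_h0_statistic(x, k, n_subsamples, seed)
    sy = subsampled_h0_statistic(y, k, n_subsamples, seed)
    return (sx - sy) ** 2
```

In the paper the subsamples are processed in parallel on the GPU and the comparison is made between the resulting persistence measures; averaging a scalar statistic as above is a deliberate simplification to keep the sketch short and dependency-free.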
Problem

Research questions and friction points this paper is trying to address.

Feature Discrepancy Measurement
Computational Efficiency
Persistent Homology
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persistent Homology
GPU Parallel Computing
Shape Feature Control
Hiu-Tung Wong
CIMDA, City University of Hong Kong
Darrick Lee
Chancellor's Fellow, University of Edinburgh
applied algebraic topology · signature methods · geometry · category theory
Hong Yan
CIMDA, City University of Hong Kong