🤖 AI Summary
This work addresses the sensitivity of machine-learning features to small input perturbations by proposing a neural network framework that combines topological awareness with Lipschitz stability. Methodologically, it integrates persistence diagram embeddings with Lipschitz-constrained neural networks, yielding learnable, discriminative geometric representations while rigorously bounding the model's Lipschitz constant — and thereby providing provable ε-robustness certificates for individual input samples. The key contribution is unifying the structural expressiveness of topological data analysis with the theoretical stability guarantees of Lipschitz neural networks, overcoming the non-differentiability and optimization difficulties of conventional topological features. Experiments on the ORBIT5K dynamical-system trajectory dataset demonstrate sample-level robustness verification, confirming strong agreement between the theoretical stability bounds and empirical robust performance.
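The controllable Lipschitz constant mentioned above can be illustrated with a minimal sketch (not the paper's code; all names here are ours): a linear map whose weight matrix is rescaled so that its spectral norm, and hence its Lipschitz constant, stays below a chosen bound `L`.

```python
import numpy as np

def lipschitz_linear(W, b, L=1.0):
    """Return x -> W'x + b where the spectral norm of W' is at most L.

    A linear map's Lipschitz constant (in the 2-norm) equals the largest
    singular value of its weight matrix, so rescaling the weights caps it.
    """
    sigma = np.linalg.norm(W, ord=2)          # largest singular value of W
    W_scaled = W * min(1.0, L / sigma)        # rescale only if sigma > L
    return lambda x: W_scaled @ x + b

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
f = lipschitz_linear(W, b=np.zeros(4), L=1.0)

# The resulting map is 1-Lipschitz: ||f(x) - f(y)|| <= ||x - y||.
x, y = rng.normal(size=3), rng.normal(size=3)
assert np.linalg.norm(f(x) - f(y)) <= np.linalg.norm(x - y) + 1e-9
```

Stacking such 1-Lipschitz layers with 1-Lipschitz activations keeps the whole network's Lipschitz constant bounded by the product of the per-layer bounds.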
📝 Abstract
We propose a neural network architecture that can learn discriminative geometric representations of data from persistence diagrams, common descriptors in Topological Data Analysis. The learned representations enjoy Lipschitz stability with a controllable Lipschitz constant. In adversarial learning, this stability can be used to certify $\epsilon$-robustness for samples in a dataset, which we demonstrate on the ORBIT5K dataset representing the orbits of a discrete dynamical system.
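The certification step described in the abstract can be sketched as follows (a hedged illustration, not the paper's procedure; function names are ours): for a classifier whose logits are each $L$-Lipschitz in the input, any two logits can close their gap at a rate of at most $2L$ per unit of input perturbation, so the predicted class cannot change within radius $\epsilon = \text{margin}/(2L)$.

```python
import numpy as np

def certified_radius(logits, L):
    """Conservative certified radius for an L-Lipschitz classifier.

    If each logit is L-Lipschitz, the difference between any two logits
    changes by at most 2*L*||dx||, so the top prediction cannot flip
    for perturbations smaller than margin / (2*L).
    """
    top2 = np.sort(logits)[-2:]       # runner-up and best logit
    margin = top2[1] - top2[0]        # gap between best and runner-up
    return margin / (2.0 * L)

logits = np.array([0.1, 2.5, 0.7])   # toy logits for a 3-class problem
eps = certified_radius(logits, L=1.5)
# margin = 2.5 - 0.7 = 1.8, so eps = 1.8 / (2 * 1.5) = 0.6
assert abs(eps - 0.6) < 1e-12
```

A sample is then certified $\epsilon$-robust whenever its computed radius exceeds the target $\epsilon$; tightening the network's Lipschitz bound directly enlarges these certified radii.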