🤖 AI Summary
This work addresses the challenge of verifying Lipschitz constants of conventional neural networks, which typically relies either on computationally expensive methods or on trivial bounds too loose to guarantee adversarial robustness and generalization. The authors propose a "verification-by-training" paradigm that uses structural design to directly optimize and tighten the trivial Lipschitz bound during training, circumventing complex post-hoc verification. Key innovations include norm-saturating polynomial activations (polyactivations), bias-free sinusoidal layers, and extensions to non-Euclidean norms, which together eliminate three major sources of bound looseness. On MNIST, the resulting networks achieve Lipschitz bounds several orders of magnitude lower than those of existing approaches, within 10% of the true Lipschitz constant, significantly improving both robustness and generalization.
📝 Abstract
The global Lipschitz constant of a neural network governs both adversarial robustness and generalization.
Conventional approaches to "certified training" typically follow a train-then-verify paradigm: they train a network and then attempt to bound its Lipschitz constant.
Because the efficient "trivial bound" (the product of the layerwise Lipschitz constants) is exponentially loose for arbitrary networks, these approaches must rely on computationally expensive techniques such as semidefinite programming, mixed-integer programming, or branch-and-bound.
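For intuition, the trivial bound and its looseness can be illustrated on a toy ReLU network (this is a generic sketch, not the paper's architecture): the bound is the product of the layerwise spectral norms, while a finite-difference search over random input pairs gives a lower estimate of the true Lipschitz constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer ReLU network with random weights (illustrative only).
weights = [rng.standard_normal((16, 8)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((1, 16))]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)  # ReLU is 1-Lipschitz
    return weights[-1] @ x

# Trivial bound: product of layerwise Lipschitz constants
# (for a linear layer under the 2-norm, its spectral norm).
trivial_bound = np.prod([np.linalg.norm(W, 2) for W in weights])

# Crude empirical lower estimate of the true Lipschitz constant.
empirical = 0.0
for _ in range(500):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    ratio = np.linalg.norm(forward(x) - forward(y)) / np.linalg.norm(x - y)
    empirical = max(empirical, ratio)

# For generic (untrained) weights the trivial bound typically far
# exceeds the empirical estimate, illustrating its looseness.
print(trivial_bound, empirical)
```

The gap between the two numbers is exactly what the paper's training procedure is designed to close.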
We propose a different paradigm: rather than designing complex verifiers for arbitrary networks, we design networks to be verifiable by the fast trivial bound.
We show that directly penalizing the trivial bound during training forces it to become tight, thereby effectively regularizing the true Lipschitz constant.
To achieve this, we identify three structural obstructions to a tight trivial bound (dead neurons, bias terms, and ill-conditioned weights) and introduce architectural mitigations, including a novel notion of norm-saturating polyactivations and bias-free sinusoidal layers.
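As a minimal sketch of the penalty idea (an assumption about the form of the regularizer, not the paper's exact objective): penalizing the log of the trivial bound adds the sum of log layerwise spectral norms to the task loss.

```python
import numpy as np

def trivial_bound_penalty(weights, lam=1e-2):
    # log of the trivial Lipschitz bound = sum of log layerwise
    # spectral norms; adding lam * log_bound to the task loss
    # penalizes the bound directly during training.
    log_bound = sum(np.log(np.linalg.norm(W, 2)) for W in weights)
    return lam * log_bound
```

Working in log space keeps the penalty well-scaled even when the raw product of norms spans many orders of magnitude.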
Our approach avoids the runtime complexity of advanced verification while achieving strong results: we train robust networks on MNIST with Lipschitz bounds that are small (orders of magnitude lower than comparable works) and tight (within 10% of the ground truth).
The experimental results validate the theoretical guarantees, support the proposed mechanisms, and extend empirically to diverse activations and non-Euclidean norms.