Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of out-of-distribution (OOD) detection under adversarial attacks, this paper proposes the first embedding learning framework integrating Lyapunov stability theory with neural ordinary differential equations (NODEs). The method constructs stable equilibrium points for in-distribution (ID) and OOD samples to achieve robust separation, enabling high-quality pseudo-OOD embedding generation without access to real OOD data. An orthogonal binary layer is introduced to maximize the margin between the two equilibrium manifolds, while adversarial training further enhances the stability of the learned embedding space. Evaluated on bidirectional OOD detection tasks using CIFAR-10 and CIFAR-100, the approach improves adversarial detection rates from 37.8% to 80.1% and from 29.0% to 67.0%, respectively—substantially outperforming existing methods. Its core innovation lies in the first systematic incorporation of Lyapunov stability theory into OOD detection, establishing a provably stable embedding paradigm for trustworthy machine learning.
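The stability argument in the summary above can be illustrated with a toy dynamical system. A minimal NumPy sketch, not the paper's implementation: for dynamics dz/dt = -A(z - z_eq) with A symmetric positive definite, the Lyapunov function V(z) = ||z - z_eq||^2 decreases along trajectories, so a perturbed embedding flows back to its equilibrium. The names `z_eq`, `A`, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spd(dim):
    """Random symmetric positive definite matrix -> contracting dynamics."""
    B = rng.normal(size=(dim, dim))
    return B @ B.T + dim * np.eye(dim)

def integrate(z0, z_eq, A, steps=200, dt=0.01):
    """Euler integration of dz/dt = -A (z - z_eq)."""
    z = z0.copy()
    for _ in range(steps):
        z = z - dt * A @ (z - z_eq)
    return z

dim = 8
A = make_spd(dim)
z_eq = np.ones(dim)                           # stand-in equilibrium for the ID class
z0 = z_eq + rng.normal(scale=2.0, size=dim)   # "adversarially" perturbed start

V0 = np.sum((z0 - z_eq) ** 2)                 # Lyapunov energy before
zT = integrate(z0, z_eq, A)
VT = np.sum((zT - z_eq) ** 2)                 # Lyapunov energy after
print(V0, VT)                                 # V shrinks: the perturbation is absorbed
```

In AROS the dynamics are a learned NODE and the decrease of V is encouraged through a tailored loss rather than guaranteed by construction, but the qualitative behavior is the same: perturbed inputs are pulled back toward their class equilibrium.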

📝 Abstract
Despite significant advancements in out-of-distribution (OOD) detection, existing methods still struggle to maintain robustness against adversarial attacks, compromising their reliability in critical real-world applications. Previous studies have attempted to address this challenge by exposing detectors to auxiliary OOD datasets alongside adversarial training. However, the increased data complexity inherent in adversarial training, and the myriad ways that OOD samples can arise during testing, often prevent these approaches from establishing robust decision boundaries. To address these limitations, we propose AROS, a novel approach leveraging neural ordinary differential equations (NODEs) with the Lyapunov stability theorem to obtain robust embeddings for OOD detection. By incorporating a tailored loss function, we apply Lyapunov stability theory to ensure that both in-distribution (ID) and OOD data converge to stable equilibrium points within the dynamical system. This approach encourages any perturbed input to return to its stable equilibrium, thereby enhancing the model's robustness against adversarial perturbations. To avoid relying on additional data, we generate fake OOD embeddings by sampling from low-likelihood regions of the ID feature space, approximating the boundaries where OOD data are likely to reside. To further enhance robustness, we propose an orthogonal binary layer following the stable feature space, which maximizes the separation between the equilibrium points of ID and OOD samples. We validate our method through extensive experiments across several benchmarks, demonstrating superior performance, particularly under adversarial attacks. Notably, our approach improves robust detection performance from 37.8% to 80.1% on CIFAR-10 vs. CIFAR-100 and from 29.0% to 67.0% on CIFAR-100 vs. CIFAR-10.
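The fake-OOD generation step described in the abstract can be sketched with a simple density model. This is an illustrative NumPy approximation, not the paper's exact procedure: fit a diagonal Gaussian to ID embeddings, sample candidates from an inflated covariance, and keep only the low-likelihood tail as pseudo-OOD embeddings. The embedding dimensions, the 3x covariance inflation, and the bottom-quartile cutoff are all assumed choices.

```python
import numpy as np

rng = np.random.default_rng(1)

id_emb = rng.normal(loc=0.0, scale=1.0, size=(2000, 16))  # stand-in ID embeddings
mu = id_emb.mean(axis=0)
var = id_emb.var(axis=0) + 1e-6                           # diagonal covariance fit

def log_likelihood(x):
    """Diagonal-Gaussian log-likelihood under the ID fit."""
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)

# Sample broadly around the ID fit, then keep the low-likelihood tail
# as pseudo-OOD: points near the boundary of the ID feature region.
cand = rng.normal(loc=mu, scale=3.0 * np.sqrt(var), size=(4000, 16))
ll = log_likelihood(cand)
fake_ood = cand[ll < np.quantile(ll, 0.25)]

print(fake_ood.shape[0])  # roughly a quarter of the candidates survive
print(log_likelihood(fake_ood).mean() < log_likelihood(id_emb).mean())  # True
```

These surrogate embeddings stand in for real OOD data during training, which is how the method avoids auxiliary OOD datasets.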
Problem

Research questions and friction points this paper is trying to address.

Adversarial Attacks
Unknown Class Recognition
Machine Learning Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Ordinary Differential Equations
Lyapunov Stability Theorem
Adversarial Robustness
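One of the listed contributions, the orthogonal binary layer, can be sketched as a two-output linear head with orthonormal weight rows. The QR-based construction below is an illustrative assumption, not necessarily the paper's parameterization: orthonormal rows make the head an isometry on its 2-D row span, so it cannot collapse the directions separating the two equilibrium manifolds.

```python
import numpy as np

rng = np.random.default_rng(2)

dim = 16
Q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))  # dim x 2, orthonormal columns
W = Q.T                                         # 2 x dim head with orthonormal rows

z_id = np.full(dim, 1.0)    # stand-in ID equilibrium embedding
z_ood = np.full(dim, -1.0)  # stand-in OOD equilibrium embedding

logits_id = W @ z_id        # binary (ID vs. OOD) logits
logits_ood = W @ z_ood

# Orthonormal rows: W @ W.T is the 2x2 identity, so the head preserves
# lengths of any vector lying in its row span.
print(np.allclose(W @ W.T, np.eye(2)))
```

In practice such a constraint would be maintained during training (e.g. by re-orthogonalization or an orthogonal parameterization); here it is only constructed once to show the property.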