NeurAM: nonlinear dimensionality reduction for uncertainty quantification through neural active manifolds

📅 2024-08-07
🏛️ arXiv.org
📈 Citations: 3 · Influential citations: 0
🤖 AI Summary
To address uncertainty quantification (UQ) for expensive mathematical models whose gradients are unavailable, this paper proposes Neural Active Manifolds (NeurAM): a method that uses an autoencoder to learn a one-dimensional nonlinear active manifold capturing the model's output variability without gradient information, while simultaneously constructing an efficient surrogate on this manifold to support many-query tasks such as sensitivity analysis and uncertainty propagation. Its key innovation is the formulation of a low-dimensional manifold shared across models of different fidelity, which yields multifidelity sampling estimators with reduced variance; this is supported by theoretical analysis under idealized conditions and by numerical experiments demonstrating cross-model manifold consistency. Across multiple benchmark problems, NeurAM outperforms existing dimensionality-reduction-based UQ methods in both accuracy and computational efficiency. By unifying manifold learning and surrogate modeling in a gradient-free framework, NeurAM offers a scalable, general-purpose approach to UQ for black-box models.
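The construction described above lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch rendering of the idea, not the authors' code: an encoder maps the d-dimensional input to a scalar manifold coordinate, a decoder maps that coordinate back to input space, and a surrogate on the coordinate is trained jointly so that surrogate ∘ encoder reproduces the model output. All class, function, and loss choices here (`NeurAM`, `train_neuram`, the consistency term at the projected inputs, `lam`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeurAM(nn.Module):
    """Sketch: 1D autoencoder over the model inputs plus a surrogate
    defined on the latent manifold coordinate."""
    def __init__(self, dim_in: int, width: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(  # input x -> 1D coordinate t
            nn.Linear(dim_in, width), nn.Tanh(), nn.Linear(width, 1))
        self.decoder = nn.Sequential(  # t -> point on manifold in input space
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, dim_in))
        self.surrogate = nn.Sequential(  # t -> scalar model output
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))

def train_neuram(x, q, epochs=2000, lam=1.0):
    """Fit manifold and surrogate jointly on precomputed pairs (x, q),
    where q = Q(x); no gradients of the underlying model Q are needed."""
    net = NeurAM(x.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        t = net.encoder(x)                    # latent coordinates
        q_hat = net.surrogate(t).squeeze(-1)  # surrogate prediction
        x_proj = net.decoder(t)               # projection onto the manifold
        # The surrogate should reproduce the model output through the 1D
        # bottleneck, and stay consistent when re-evaluated at the
        # projected inputs (one plausible choice of objective).
        loss = ((q_hat - q) ** 2).mean() + lam * (
            (net.surrogate(net.encoder(x_proj)).squeeze(-1) - q) ** 2).mean()
        loss.backward()
        opt.step()
    return net
```

Once trained, many-query tasks evaluate the cheap composition `net.surrogate(net.encoder(x))` in place of the expensive model.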

📝 Abstract
We present a new approach for nonlinear dimensionality reduction, specifically designed for computationally expensive mathematical models. We leverage autoencoders to discover a one-dimensional neural active manifold (NeurAM) capturing the model output variability, together with a simultaneously learned surrogate model whose inputs live on this manifold. The proposed dimensionality reduction framework can then be applied to outer-loop, many-query tasks such as sensitivity analysis and uncertainty propagation. In particular, we prove theoretically, under idealized conditions, and demonstrate numerically, in challenging test cases, how NeurAM yields multifidelity sampling estimators with reduced variance by sampling the models on the discovered low-dimensional manifold shared among them. Several numerical examples illustrate the main features of the proposed dimensionality reduction strategy and highlight its advantages over existing approaches in the literature.
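To make the variance-reduction claim concrete, here is a minimal NumPy sketch of a standard two-fidelity control-variate estimator of E[Q_HF], written under the assumption that a cheap low-fidelity evaluation is available. The names (`q_hf`, `q_lf`, `sample_inputs`, `n_hf`, `n_lf`) are illustrative, and the paper's exact estimator and sample-allocation details may differ.

```python
import numpy as np

def mf_estimator(q_hf, q_lf, sample_inputs, n_hf, n_lf, seed=0):
    """Two-fidelity control-variate estimate of E[Q_HF].

    q_hf, q_lf: callables mapping an (n, d) input array to (n,) outputs;
    in the NeurAM setting, q_lf would be evaluated through the shared
    one-dimensional manifold, which raises its correlation with q_hf.
    """
    rng = np.random.default_rng(seed)
    x_hf = sample_inputs(n_hf, rng)      # small batch: expensive model
    x_lf = sample_inputs(n_lf, rng)      # large batch: cheap model only
    y_hf = q_hf(x_hf)
    y_lf_paired = q_lf(x_hf)             # LF at the same inputs as HF
    y_lf = q_lf(x_lf)
    # Optimal control-variate weight: alpha = cov(HF, LF) / var(LF)
    c = np.cov(y_hf, y_lf_paired)
    alpha = c[0, 1] / c[1, 1]
    # The corrected estimator's variance shrinks as corr(HF, LF) -> 1,
    # which is what sharing the manifold across fidelities encourages.
    return y_hf.mean() + alpha * (y_lf.mean() - y_lf_paired.mean())
```

The stronger the correlation between the high- and low-fidelity outputs, which the shared manifold is designed to maximize, the larger the variance reduction relative to plain Monte Carlo on the same high-fidelity budget.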
Problem

Research questions and friction points this paper is trying to address.

Reducing dimensionality for expensive models without gradient knowledge
Creating low-dimensional manifolds to capture output variability
Enabling efficient uncertainty quantification and sensitivity analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoencoders discover a one-dimensional neural active manifold
Simultaneously learns a surrogate model with inputs on this manifold
Enables multifidelity sampling estimators with reduced variance via a manifold shared across fidelities
👥 Authors
Andrea Zanoni
Centro di Ricerca Matematica Ennio De Giorgi, Scuola Normale Superiore, Pisa, Italy
Gianluca Geraci
Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA
Matteo Salvador
Pasteur Labs & ISI
Mathematical Modeling · Scientific Machine Learning · Uncertainty Quantification · Digital Twins
A. L. Marsden
Institute for Computational and Mathematical Engineering, Pediatric Cardiology, and Bioengineering, Stanford University, Stanford, CA, USA
D. Schiavazzi
Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA