🤖 AI Summary
To address uncertainty quantification (UQ) for expensive, gradient-free mathematical models, this paper proposes Neural Active Manifolds (NeurAM): a method that uses an autoencoder to learn a one-dimensional nonlinear active manifold capturing the model's output variability without gradient information, while simultaneously constructing an efficient surrogate on this manifold to support many-query tasks such as sensitivity analysis and uncertainty propagation. Its key contribution is a shared low-dimensional manifold across models of different fidelity, which enables variance reduction in multifidelity sampling; this is supported by theoretical analysis under idealized conditions and by numerical experiments demonstrating cross-model manifold consistency. Across several benchmark problems, NeurAM outperforms existing dimensionality-reduction-based UQ methods in both accuracy and computational efficiency. By unifying manifold learning and surrogate modeling in a gradient-free framework, NeurAM offers a general-purpose approach to UQ for black-box models.
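To make the structure concrete, here is a heavily simplified, hypothetical sketch of the "low-dimensional coordinate + surrogate on that coordinate" idea. NeurAM itself trains a neural encoder/decoder jointly with a neural surrogate; in this toy version a linear projection stands in for the encoder and a cubic polynomial for the surrogate, and the toy `model` function is invented for illustration. Both fits are gradient-free with respect to the model, in the same spirit as the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Toy "expensive" model with one dominant active direction (hypothetical)
    return np.exp(0.7 * X[:, 0] + 0.3 * X[:, 1])

X = rng.uniform(-1, 1, size=(200, 2))
y = model(X)

# Linear stand-in for the encoder: the input direction most correlated with
# the output, found by least squares on model samples (no model gradients)
w = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)[0]
w /= np.linalg.norm(w)
t = X @ w  # 1D latent coordinate (the "manifold" parameter)

# Surrogate defined on the latent coordinate: a cubic polynomial fit
surrogate = np.poly1d(np.polyfit(t, y, deg=3))

# Held-out accuracy of the 1D reduction + surrogate
X_test = rng.uniform(-1, 1, size=(500, 2))
y_test = model(X_test)
rel_mse = np.mean((surrogate(X_test @ w) - y_test) ** 2) / np.var(y_test)
print(f"relative MSE of 1D surrogate: {rel_mse:.4f}")
```

Because the toy model varies mostly along a single direction, even this crude one-dimensional reduction reproduces it accurately; NeurAM's nonlinear encoder targets the same effect for models whose active "direction" is a curved manifold rather than a line.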
📝 Abstract
We present a new approach for nonlinear dimensionality reduction, specifically designed for computationally expensive mathematical models. We leverage autoencoders to discover a one-dimensional neural active manifold (NeurAM) capturing the model output variability, together with a simultaneously learnt surrogate model whose inputs lie on this manifold. The proposed dimensionality reduction framework can then be applied to outer-loop, many-query tasks such as sensitivity analysis and uncertainty propagation. In particular, we prove, both theoretically under idealized conditions and numerically in challenging test cases, that NeurAM yields multifidelity sampling estimators with reduced variance by sampling the models on the discovered low-dimensional manifold shared among models. Several numerical examples illustrate the main features of the proposed dimensionality reduction strategy and highlight its advantages over existing approaches in the literature.
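The variance-reduction mechanism behind multifidelity estimators can be illustrated with a generic two-fidelity control-variate Monte Carlo estimator. This is a textbook sketch, not the paper's method: the models `f_hi`/`f_lo`, their correlation, and the sample sizes are all hypothetical, and NeurAM's key step of routing both models through the shared low-dimensional manifold is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical high- and low-fidelity models, correlated by construction
def f_hi(x):
    return np.sin(x) + 0.1 * x**2

def f_lo(x):
    return np.sin(x)  # cheap approximation of f_hi

N = 1_000     # affordable number of high-fidelity evaluations
M = 100_000   # cheap low-fidelity evaluations
x_hi = rng.normal(size=N)
x_lo = rng.normal(size=M)

y_hi = f_hi(x_hi)
y_lo = f_lo(x_hi)  # low-fidelity model evaluated at the SAME inputs

# Control-variate weight minimizing the estimator variance:
# alpha = Cov(f_hi, f_lo) / Var(f_lo)
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo)

# Multifidelity estimator: high-fidelity mean, corrected by the
# discrepancy between the small and large low-fidelity sample means
mf_est = y_hi.mean() - alpha * (y_lo.mean() - f_lo(x_lo).mean())

print(f"plain MC estimate : {y_hi.mean():.4f}")
print(f"multifidelity est.: {mf_est:.4f}")
```

The correction term cancels most of the variability that the two fidelities share, so the residual variance is roughly that of `f_hi - alpha * f_lo`. The variance reduction improves with the correlation between fidelities, which is exactly what sampling both models on a shared low-dimensional manifold, as proposed in the paper, is designed to increase.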