🤖 AI Summary
The rise of deep learning has prompted questions about the obsolescence of classical statistical methods. Method: This paper systematically compares Physics-Informed Neural Networks (PINNs) and Manifold-constrained Gaussian Process Inference (MAGI) on ordinary differential equation (ODE) inverse problems under sparse, noisy observations. Contribution/Results: We provide the first rigorous empirical demonstration that MAGI significantly outperforms PINNs across three key dimensions: parameter inference (42–68% lower error), trajectory reconstruction, and extrapolative forecasting (3.1× higher accuracy). MAGI achieves this with 99.7% fewer trainable parameters, 90% faster hyperparameter tuning, and greater robustness to the accumulation of numerical error. These results establish that physics-constrained Bayesian nonparametric statistical methods retain indispensable advantages in interpretability, few-shot generalization, and physical fidelity, challenging the prevailing narrative that statistical approaches have been superseded by deep learning.
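For concreteness, the ODE inverse problem used as the benchmark can be written in the following generic form (notation ours, for illustration; the paper may parameterize it differently): given sparse, noisy observations of a dynamical system, recover the parameters and the latent trajectory.

```latex
\dot{x}(t) = f\big(x(t),\,\theta\big), \qquad
y_i = x(\tau_i) + \varepsilon_i, \quad
\varepsilon_i \sim \mathcal{N}(0,\,\sigma^2 I), \quad i = 1, \dots, n.
```

The three comparison dimensions in the summary map onto this setup: parameter inference targets θ, trajectory reconstruction targets x(t) over the observation window, and extrapolative forecasting targets x(t) beyond the last observation time.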
📝 Abstract
In the era of AI, neural networks have become increasingly popular for modeling, inference, and prediction, largely due to their capacity for universal approximation. With the proliferation of such deep learning models, a question arises: are leaner statistical methods still relevant? To shed light on this question, we employ the mechanistic nonlinear ordinary differential equation (ODE) inverse problem as a testbed, using physics-informed neural networks (PINNs) as a representative of the deep learning paradigm and manifold-constrained Gaussian process inference (MAGI) as a representative of statistically principled methods. Through case studies involving the SEIR model from epidemiology and the Lorenz model from chaotic dynamics, we demonstrate that statistical methods are far from obsolete, especially when working with sparse and noisy observations. On tasks such as parameter inference and trajectory reconstruction, statistically principled methods consistently achieve lower bias and variance while using far fewer parameters and requiring less hyperparameter tuning. Statistical methods can also decisively outperform deep learning models on out-of-sample future prediction, where the absence of relevant data often leads overparameterized models astray. Additionally, we find that statistically principled approaches are more robust to the accumulation of numerical imprecision and yield representations of the underlying system that are more faithful to the true governing ODEs.
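To make the "sparse and noisy observations" regime concrete, here is a minimal sketch (our own illustration, not code from the paper) that generates such data from a standard closed-population SEIR model; the parameter values, noise level, and number of observation times are placeholder assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, sigma, gamma):
    """Standard closed-population SEIR dynamics in proportions (S+E+I+R = 1).
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    S, E, I, R = y
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Placeholder "true" parameters and initial condition (illustrative only).
beta, sigma, gamma = 0.9, 0.3, 0.2
y0 = [0.99, 0.005, 0.005, 0.0]

# Dense ground-truth trajectory from a numerical ODE solver.
t_dense = np.linspace(0.0, 60.0, 601)
sol = solve_ivp(seir_rhs, (0.0, 60.0), y0, t_eval=t_dense,
                args=(beta, sigma, gamma), rtol=1e-8, atol=1e-10)

# Sparse, noisy observations: a handful of time points with Gaussian noise,
# mimicking the low-data regime in which PINN and MAGI are compared.
rng = np.random.default_rng(0)
obs_idx = np.linspace(0, 600, 16).astype(int)   # 16 observation times
noise_sd = 0.01                                  # assumed noise level
t_obs = t_dense[obs_idx]
y_obs = sol.y[:, obs_idx] + rng.normal(0.0, noise_sd, size=(4, len(obs_idx)))
```

The Lorenz case study would follow the same pattern with the Lorenz-63 right-hand side swapped in; both methods are then tasked with recovering the parameters and trajectory from `t_obs` and `y_obs` alone.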