Differentiable Programming for Differential Equations: A Review

📅 2024-06-14
🏛️ arXiv.org
📈 Citations: 18
Influential: 3
🤖 AI Summary
This work addresses the efficient and robust computation of gradients for numerical solutions of differential equations. We systematically survey four differentiable programming paradigms (adjoint methods, automatic differentiation via source-to-source transformation and operator overloading, numerical perturbation, and symbolic-numeric hybrid approaches) and introduce a unified differentiability framework that bridges inverse problem solving and machine learning methodologies. We establish a cross-method comparative taxonomy and provide platform-specific best-practice guidelines for scientific computing libraries including SciPy, JAX, and TorchDiffeq. Our analysis rigorously characterizes trade-offs among accuracy, memory footprint, computational complexity, and applicability domains for each method. The results deliver both theoretical foundations and practical implementation pathways for tasks that fuse differential-equation models with data, including parameter inversion, sensitivity analysis, and physics-informed neural networks (PINNs).
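One of the surveyed paradigms, numerical perturbation, can be sketched in a few lines. The toy example below (our own construction, not code from the paper) differentiates an RK4 solution of dy/dt = a·y with respect to the parameter a using a central finite difference, and checks the result against the closed-form derivative; the names `rk4` and `solve` are illustrative.

```python
import numpy as np

def rk4(f, y0, t0, t1, n, a):
    # Classic fourth-order Runge-Kutta integration of dy/dt = f(y, a).
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        k1 = f(y, a)
        k2 = f(y + 0.5 * h * k1, a)
        k3 = f(y + 0.5 * h * k2, a)
        k4 = f(y + h * k3, a)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

# Toy linear problem dy/dt = a*y with y(0) = 1; exact solution exp(a*t).
solve = lambda a: rk4(lambda y, a: a * y, 1.0, 0.0, 1.0, 100, a)

a, eps = 0.5, 1e-6
fd_grad = (solve(a + eps) - solve(a - eps)) / (2.0 * eps)  # central difference in a
exact_grad = np.exp(a)  # d/da exp(a*t) at t = 1 is t*exp(a*t) = exp(a)
```

The perturbation size `eps` trades truncation error against floating-point cancellation, one of the accuracy/cost trade-offs the review characterizes.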

📝 Abstract
The differentiable programming paradigm is a cornerstone of modern scientific computing. It refers to numerical methods for computing the gradient of a numerical model's output. Many scientific models are based on differential equations, where differentiable programming plays a crucial role in calculating model sensitivities, inverting model parameters, and training hybrid models that combine differential equations with data-driven approaches. Furthermore, recognizing the strong synergies between inverse methods and machine learning offers the opportunity to establish a coherent framework applicable to both fields. Differentiating functions based on the numerical solution of differential equations is non-trivial. Numerous methods based on a wide variety of paradigms have been proposed in the literature, each with pros and cons specific to the type of problem investigated. Here, we provide a comprehensive review of existing techniques to compute derivatives of numerical solutions of differential equations. We first discuss the importance of gradients of solutions of differential equations in a variety of scientific domains. Second, we lay out the mathematical foundations of the various approaches and compare them with each other. Third, we cover the computational considerations and explore the solutions available in modern scientific software. Last but not least, we provide best practices and recommendations for practitioners. We hope that this work accelerates the fusion of scientific models and data, and fosters a modern approach to scientific modelling.
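Among the techniques the abstract alludes to, the continuous (forward) sensitivity approach augments the ODE with an equation for the derivative of the state with respect to a parameter. The sketch below is a minimal illustration of that idea using SciPy's `solve_ivp` on a toy problem dy/dt = a·y, chosen here because its sensitivity has a closed form; it is not code from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented(t, z, a):
    # z = [y, s] where s = dy/da; the sensitivity satisfies
    # ds/dt = (df/dy)*s + df/da = a*s + y for f(y, a) = a*y.
    y, s = z
    return [a * y, a * s + y]

a = 0.5
sol = solve_ivp(augmented, (0.0, 1.0), [1.0, 0.0], args=(a,),
                rtol=1e-10, atol=1e-12)
y_T, s_T = sol.y[:, -1]
# For y(t) = exp(a*t): both y(1) and dy/da at t = 1 equal exp(a).
```

Solving state and sensitivity together lets the adaptive integrator control the error of both, at the cost of doubling (in general, multiplying by the number of parameters) the system size.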
Problem

Research questions and friction points this paper is trying to address.

Reviewing gradient computation methods for differential equation solutions
Comparing mathematical approaches for differentiating numerical differential equations
Providing best practices for differentiable programming in scientific computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable programming computes gradients of numerical solutions of differential equations
Reviews methods for derivative computation in scientific models
Combines inverse methods with machine learning frameworks
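A close relative of finite differences within the perturbation family is complex-step differentiation, which avoids subtractive cancellation by perturbing the parameter along the imaginary axis. A minimal sketch on the same kind of toy problem (our own construction; the explicit Euler integrator `euler_solve` is named for illustration only):

```python
import numpy as np

def euler_solve(a, y0=1.0, T=1.0, n=2000):
    # Explicit Euler for dy/dt = a*y. Every operation here is
    # complex-analytic, which is what the complex-step trick requires.
    step = T / n
    y = y0 + 0.0j
    for _ in range(n):
        y = y + step * a * y
    return y

a, h = 0.5, 1e-30
# Complex step: Im(F(a + i*h)) / h approximates F'(a) with no cancellation,
# so h can be taken near machine-epsilon-free limits like 1e-30.
grad = euler_solve(a + 1j * h).imag / h
```

The remaining error against the exact derivative exp(a) comes entirely from the Euler discretization, not from the differentiation step itself.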
Facundo Sapienza
Department of Statistics, University of California, Berkeley, USA
Jordi Bolibar
Univ. Grenoble Alpes, CNRS, IRD, G-INP, Institut des Géosciences de l’Environnement, Grenoble, France; TU Delft, Department of Geosciences and Civil Engineering, Delft, Netherlands
Frank Schäfer
CSAIL, Massachusetts Institute of Technology, Cambridge, USA
Brian Groenke
TU Berlin, Department of Electrical and Computer Engineering, Berlin, Germany; Helmholtz Centre for Environmental Research, Leipzig, Germany
Avik Pal
CSAIL, Massachusetts Institute of Technology, Cambridge, USA
Victor Boussange
Swiss Federal Research Institute WSL, Birmensdorf, Switzerland
Patrick Heimbach
Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, USA; Jackson School of Geosciences, University of Texas at Austin, USA
Giles Hooker
Professor of Statistics and Data Science, University of Pennsylvania
Fernando Pérez
Department of Statistics, University of California, Berkeley, USA
Per-Olof Persson
University of California, Berkeley
Christopher Rackauckas
Massachusetts Institute of Technology, Cambridge, USA; JuliaHub, Cambridge, USA