Learning second-order TVD flux limiters using differentiable solvers

📅 2025-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the empirical design and poor generalizability of flux limiters in numerical simulations of hyperbolic conservation laws. We propose the first differentiable, TVD-guaranteed, data-driven framework for flux limiting. Methodologically, we parameterize a second-order flux limiter as a convex combination of Minmod and Superbee, model its weight via a lightweight neural network, and embed it within a differentiable finite-volume solver for end-to-end optimization—explicitly enforcing both the TVD condition and second-order accuracy throughout training. Contributions include: (1) the first neural parameterization and differentiable training of flux limiters; (2) superior performance over most classical limiters on shock/discontinuity problems governed by Burgers’ equation and the 1D Euler equations—even when trained exclusively on the linear advection equation; and (3) plug-and-play compatibility with existing CFD codes, delivering significant gains in both accuracy and robustness.

Technology Category

Application Category

📝 Abstract
This paper presents a data-driven framework for learning optimal second-order total variation diminishing (TVD) flux limiters via differentiable simulations. In our fully differentiable finite volume solvers, the limiter functions are replaced by neural networks. By representing the limiter as a pointwise convex linear combination of the Minmod and Superbee limiters, we enforce both second-order accuracy and TVD constraints at all stages of training. Our approach leverages gradient-based optimization through automatic differentiation, allowing a direct backpropagation of errors from numerical solutions to the limiter parameters. We demonstrate the effectiveness of this method on various hyperbolic conservation laws, including the linear advection equation, the Burgers' equation, and the one-dimensional Euler equations. Remarkably, a limiter trained solely on linear advection exhibits strong generalizability, surpassing the accuracy of most classical flux limiters across a range of problems with shocks and discontinuities. The learned flux limiters can be readily integrated into existing computational fluid dynamics codes, and the proposed methodology also offers a flexible pathway to systematically develop and optimize flux limiters for complex flow problems.
Problem

Research questions and friction points this paper is trying to address.

Learning optimal second-order TVD flux limiters using neural networks.
Enforcing second-order accuracy and TVD constraints via differentiable simulations.
Generalizing learned limiters across hyperbolic conservation laws and shocks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural networks replace traditional flux limiters
Gradient-based optimization via automatic differentiation
Trained limiters generalize across multiple equations
🔎 Similar Papers
No similar papers found.