MFC 5.0: An exascale many-physics flow solver

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational challenges of multiphysics flow simulations in engineering, medicine, and the fundamental sciences, this work introduces MFC 5.0, a high-performance CFD solver. Methodologically, it unifies support for immersed boundary methods, multiphase flow with phase change, fluid–structure interaction, and reactive flows; features a novel Pyrometheus-driven thermochemical automatic code generation framework; and integrates advanced numerical techniques including relaxed characteristic boundary conditions, Strang splitting, and low-Mach-number formulations. Architecturally, it achieves end-to-end scalability, from single GPU/APU nodes to exascale systems (e.g., Frontier and El Capitan), leveraging WENO schemes, Euler–Euler/Lagrangian subgrid modeling, and heterogeneous acceleration. Strong and weak scaling efficiencies exceed 90%, with measured performance reaching exaFLOPS-level throughput. These advances enable large-scale simulations critical to aerospace, energy, and biomedical applications.
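The summary mentions Strang splitting for integrating stiff sub-grid features alongside the flow. The idea can be illustrated with a minimal sketch (a toy scalar ODE, not MFC's implementation): split the right-hand side into a non-stiff part and a stiff part, advance the non-stiff operator a half step, the stiff operator a full step, then the non-stiff operator another half step, which recovers second-order accuracy.

```python
import math

# Toy sketch of Strang splitting (NOT MFC's implementation).
# Model problem: du/dt = -a*u - b*u**2, split into a linear
# "convection-like" part (-a*u) and a stiff nonlinear
# "reaction-like" part (-b*u**2); each sub-step is solved exactly.
def strang_step(u, dt, a, b):
    u = u * math.exp(-a * dt / 2)   # half step of du/dt = -a*u
    u = u / (1 + b * u * dt)        # full step of du/dt = -b*u**2 (exact)
    u = u * math.exp(-a * dt / 2)   # second half step of du/dt = -a*u
    return u

def integrate(u0, t_end, n, a=1.0, b=5.0):
    u, dt = u0, t_end / n
    for _ in range(n):
        u = strang_step(u, dt, a, b)
    return u

def exact(u0, t, a=1.0, b=5.0):
    # Closed-form (Bernoulli) solution of du/dt = -a*u - b*u**2
    return a * u0 / ((a + b * u0) * math.exp(a * t) - b * u0)
```

Halving the time step should reduce the error by roughly a factor of four, the signature of the second-order accuracy that makes Strang splitting preferable to first-order (Lie) splitting for stiff source terms.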

📝 Abstract
Engineering, medicine, and the fundamental sciences broadly rely on flow simulations, making performant computational fluid dynamics solvers an open source software mainstay. A previous work made MFC 3.0 a published open source solver with many features. MFC 5.0 is a marked update to MFC 3.0, including a broad set of well-established and novel physical models and numerical methods and the introduction of GPU and APU (or superchip) acceleration. We exhibit state-of-the-art performance and ideal scaling on the first two exascale supercomputers, OLCF Frontier and LLNL El Capitan. Combined with MFC's single-GPU/APU performance, MFC achieves exascale computation in practice. With these capabilities, MFC has evolved into a tool for conducting simulations that many engineering challenge problems hinge upon. New physical features include the immersed boundary method, $N$-fluid phase change, Euler--Euler and Euler--Lagrange sub-grid bubble models, fluid--structure interaction, hypo- and hyper-elastic materials, chemically reacting flow, two-material surface tension, and more. Numerical techniques now represent the current state of the art, including general relaxation characteristic boundary conditions, WENO variants, Strang splitting for stiff sub-grid flow features, and low Mach number treatments. Weak scaling to tens of thousands of GPUs on OLCF Frontier and LLNL El Capitan shows efficiencies within 5% of ideal at up to over 90% of each system's size. Strong scaling results for a 16-fold increase in device count show parallel efficiencies over 90% on OLCF Frontier. Other MFC improvements include ensuring code resilience and correctness with a continuous integration suite, the use of metaprogramming to reduce code length and maintain performance portability, and efficient computational representations for chemical reactions and thermodynamics via code generation with Pyrometheus.
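The abstract credits Pyrometheus with generating efficient computational representations of chemical reactions and thermodynamics. The general technique, specializing source code to a given mechanism so constants are baked in at generation time, can be sketched with a toy generator (illustration only; this is not the real Pyrometheus API, and the function names are hypothetical):

```python
import math

# Toy illustration of thermochemical code generation (NOT the real
# Pyrometheus API). Emits specialized Python source for an Arrhenius
# rate k(T) = A * T**b * exp(-Ea / (R * T)) with the mechanism's
# constants baked into the generated code.
R_UNIV = 8.314  # universal gas constant, J/(mol K)

def generate_rate_fn(A, b, Ea):
    src = (
        "import math\n"
        "def rate(T):\n"
        f"    return {A!r} * T**{b!r} * math.exp({(-Ea / R_UNIV)!r} / T)\n"
    )
    namespace = {}
    exec(src, namespace)   # compile the generated, specialized source
    return namespace["rate"]
```

Generating code per mechanism avoids interpreting mechanism data structures inside hot loops, which is the same motivation for code generation in GPU-resident chemistry kernels.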
Problem

Research questions and friction points this paper is trying to address.

Develops an exascale many-physics flow solver for engineering and science.
Enhances MFC 3.0 with advanced physical models and GPU/APU acceleration.
Achieves state-of-the-art performance on exascale supercomputers such as Frontier.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU and APU acceleration for exascale computation
Advanced physical models like immersed boundary method
State-of-the-art numerical techniques including WENO variants
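Among the numerical techniques listed above are WENO variants. The core WENO idea, blending candidate stencil reconstructions with nonlinear weights that down-weight non-smooth stencils, can be sketched at third order for a single cell face (a hedged sketch of the textbook scheme, not MFC's production kernels):

```python
# Sketch of third-order WENO reconstruction at the right face i+1/2
# of cell i, from cell averages at (i-1, i, i+1). Textbook WENO3,
# NOT MFC's production code.
def weno3(um1, u0, up1, eps=1e-6):
    # Candidate stencil reconstructions at the i+1/2 face
    p0 = -0.5 * um1 + 1.5 * u0    # stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1     # stencil {i, i+1}
    # Smoothness indicators: large where the stencil is non-smooth
    b0 = (u0 - um1) ** 2
    b1 = (up1 - u0) ** 2
    # Nonlinear weights built from the ideal weights (1/3, 2/3)
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * p0 + w1 * p1
```

On smooth data the weights approach the ideal values and the scheme reduces to third-order upwind reconstruction; near a discontinuity the weight of the non-smooth stencil collapses, suppressing oscillations.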
Benjamin Wilfong
School of Computational Science & Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Henry A. Le Berre
School of Computational Science & Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Anand Radhakrishnan
Univ of Florida
Diego Vaca-Revelo
Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
Dimitrios Adam
School of Computational Science & Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Haocheng Yu
Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
J. R. Chreim
Department of Mechanical and Civil Engineering, California Institute of Technology, Pasadena, CA 91125, USA
Mirelys Carcana Barbosa
School of Engineering, Brown University, Providence, RI 02912, USA
Yanjun Zhang
Lecturer, University of Technology Sydney
Esteban Cisneros-Garibay
Mechanical Science & Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61820, USA
A. Gnanaskandan
Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
Mauro Rodriguez
School of Engineering, Brown University, Providence, RI 02912, USA
R. Budiardja
Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
Steve Abbott
Hewlett Packard Enterprise, Bloomington, MN 55435, USA
T. Colonius
Department of Mechanical and Civil Engineering, California Institute of Technology, Pasadena, CA 91125, USA
Spencer H. Bryngelson
Georgia Tech