Testing and benchmarking emerging supercomputers via the MFC flow solver

📅 2025-09-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Emerging exascale supercomputers demand rigorous co-verification of hardware and software stacks, particularly for complex multi-physics applications. Method: This work introduces an automated benchmarking framework built on the multi-component flow solver MFC. Its cross-platform toolchain handles input generation, automatic compilation with the Intel, Cray, NVIDIA, AMD, and GNU compilers, heterogeneous job scheduling, and fine-grained performance profiling, using time to solution per grid point per time step as the primary metric, across diverse CPUs, five generations of NVIDIA GPUs, and three generations of AMD GPUs. Contribution/Results: The framework lowers the barrier to compiler-hardware co-validation while enabling high reuse and joint correctness-performance assessment. It has been deployed on over 50 systems, including five flagship exascale platforms (e.g., Frontier, El Capitan), uncovering multiple previously unreported compiler bugs and performance regressions and providing empirical evidence that improves reliability and optimization across the HPC ecosystem.

📝 Abstract
Deploying new supercomputers requires testing and evaluation via application codes. Portable, user-friendly tools enable evaluation, and the Multicomponent Flow Code (MFC), a computational fluid dynamics (CFD) code, addresses this need. MFC is adorned with a toolchain that automates input generation, compilation, batch job submission, regression testing, and benchmarking. The toolchain design enables users to evaluate compiler-hardware combinations for correctness and performance with limited software engineering experience. As with other PDE solvers, wall time per spatially discretized grid point serves as a figure of merit. We present MFC benchmarking results for five generations of NVIDIA GPUs, three generations of AMD GPUs, and various CPU architectures, utilizing Intel, Cray, NVIDIA, AMD, and GNU compilers. These tests have revealed compiler bugs and regressions on recent machines such as Frontier and El Capitan. MFC has benchmarked approximately 50 compute devices and 5 flagship supercomputers.
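The abstract's figure of merit, wall time per spatially discretized grid point per time step (often called grind time), is straightforward to compute from a run's timing data. The sketch below is illustrative only; the grid size, step count, and wall time are made-up numbers, not measured MFC results:

```python
# Grind time: wall-clock seconds per grid point per time step,
# the figure of merit used for PDE solvers like MFC.
# All numbers below are hypothetical, not measured MFC results.

def grind_time(wall_time_s: float, grid_points: int, time_steps: int) -> float:
    """Return seconds of wall time spent per grid point per time step."""
    return wall_time_s / (grid_points * time_steps)

# Hypothetical run: a 512^3 grid advanced 1000 steps in 2000 s of wall time.
nx = ny = nz = 512
points = nx * ny * nz  # 134,217,728 grid points
t = grind_time(2000.0, points, 1000)
print(f"{t:.3e} s per point per step")  # ~1.490e-08 s
```

Because the metric normalizes out problem size and run length, it lets a single number compare the same solver across CPU and GPU architectures, which is how the paper's cross-platform benchmark results are reported.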
Problem

Research questions and friction points this paper is trying to address.

Testing and benchmarking emerging supercomputers via MFC flow solver
Evaluating compiler-hardware combinations for correctness and performance
Automating input generation, compilation, and job submission for CFD
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated toolchain for input generation and testing
Evaluates compiler-hardware combinations for performance
Benchmarks multiple GPU and CPU architectures comprehensively
Benjamin Wilfong
Georgia Institute of Technology, Atlanta, Georgia, USA
Anand Radhakrishnan
University of Florida
Particle Filters, State estimation, Markov Processes, MCMC algorithms, Machine learning
Henry A. Le Berre
Georgia Institute of Technology, Atlanta, Georgia, USA
Tanush Prathi
Georgia Institute of Technology, Atlanta, Georgia, USA
Stephen Abbott
Hewlett Packard Enterprise, Bloomington, Minnesota, USA
Spencer H. Bryngelson
Georgia Tech
computational fluid dynamics, scientific computing