🤖 AI Summary
The high-performance computing (HPC) domain has an abundance of benchmarking tools but lacks a standardized, unified classification framework. Method: This paper proposes the first standardized benchmark taxonomy for HPC, derived from a systematic literature review and multi-dimensional feature analysis across hardware, software, and algorithmic layers. A structured classification model is constructed, with key attributes—including target workload, portability, scalability, and measurement granularity—concisely tabulated. An interactive web-based platform is further developed to enable dynamic, dimension-driven querying, cross-benchmark comparison, and visual analytics. Contribution/Results: The taxonomy systematically organizes over 100 mainstream HPC benchmarks, improving the efficiency and consistency with which architects, researchers, and scientific users evaluate systems, select benchmarks, and optimize performance. It establishes a foundational framework for standardizing HPC performance assessment and facilitates reproducible, comparable, and interpretable benchmarking practices.
📝 Abstract
The field of High-Performance Computing (HPC) is defined by providing computing systems with the highest performance for a variety of demanding scientific users. The tight co-design relationship between HPC providers and users, paired with technological improvements, propels the field forward, achieving continuously higher performance and resource utilization. A key tool for system architects, architecture researchers, and scientific users is the benchmark, which allows for well-defined assessment of hardware, software, and algorithms. Many benchmarks exist in the community, ranging from individual niche benchmarks testing specific features to large-scale benchmark suites used for whole procurements. We survey the available HPC benchmarks, summarizing them in table form with key details and concise categorization, and additionally make them accessible through an interactive website. For categorization, we present a benchmark taxonomy for well-defined characterization of benchmarks.