🤖 AI Summary
This work investigates whether large language models (LLMs) can predict the performance bottleneck type of CUDA/OpenMP kernels, compute-bound or bandwidth-bound, solely from source code and target GPU specifications, without GPU execution or profiling, thereby framing Roofline-model boundary classification as a source-level binary prediction task.
Method: The paper proposes and empirically validates zero-shot and few-shot Roofline classification with LLMs, evaluated on the HeCBench benchmark using both off-the-shelf inference-only LLMs and fine-tuned variants.
Contribution/Results: Inference-only LLMs achieve up to 64% accuracy from source code alone, without any profiling data; when explicit profiling data is provided, accuracy reaches 100%. Fine-tuning on the small available dataset does not close this gap, indicating that substantially more training data is needed. These results suggest that LLMs can serve as lightweight, portable tools for HPC performance pre-assessment, offering a new methodological pathway for performance-portability analysis in high-performance computing.
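For context, the Roofline boundary underlying this classification is a simple threshold on arithmetic intensity; the formulation below is the standard Roofline-model rule, with notation of our choosing rather than the paper's:

$$
\text{label} =
\begin{cases}
\text{compute-bound}, & I \ge P_{\text{peak}} / B_{\text{peak}},\\
\text{bandwidth-bound}, & I < P_{\text{peak}} / B_{\text{peak}},
\end{cases}
\qquad I = \frac{\text{FLOPs}}{\text{bytes moved}}
$$

where $P_{\text{peak}}$ is the GPU's peak compute throughput (FLOP/s), $B_{\text{peak}}$ is its peak memory bandwidth (bytes/s), and the ratio $P_{\text{peak}} / B_{\text{peak}}$ is the ridge point of the Roofline.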
📝 Abstract
Accurate determination of the performance of parallel GPU code typically requires execution-time profiling on target hardware -- an increasingly prohibitive step due to limited access to high-end GPUs. This paper explores whether Large Language Models (LLMs) can offer an alternative approach for GPU performance prediction without relying on hardware. We frame the problem as a Roofline classification task: given the source code of a GPU kernel and the hardware specifications of a target GPU, can an LLM predict whether the kernel is compute-bound or bandwidth-bound? For this study, we build a balanced dataset of 340 GPU kernels, obtained from the HeCBench benchmark and written in CUDA and OpenMP, along with ground-truth labels obtained via empirical GPU profiling. We evaluate LLMs across four scenarios: (1) with access to profiling data for the kernel, (2) zero-shot with source code only, (3) few-shot with code-label pairs, and (4) fine-tuned on a small custom dataset. Our results show that state-of-the-art LLMs have a strong understanding of the Roofline model, achieving 100% classification accuracy when provided with explicit profiling data. We also find that reasoning-capable LLMs significantly outperform standard LLMs in the zero- and few-shot settings, achieving up to 64% accuracy from GPU source code alone, without profiling information. Lastly, we find that fine-tuning LLMs for this task will require much more data than is currently available to us. This work is among the first to use LLMs for source-level Roofline performance prediction via classification, and it illustrates their potential to guide optimization efforts when runtime profiling is infeasible. Our findings suggest that, with better datasets and prompting strategies, LLMs could become practical tools for HPC performance analysis and performance portability.
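As a concrete illustration of how ground-truth labels like those described above can be derived from profiler counters and GPU peak specifications, here is a minimal sketch; the function name, arguments, and example numbers are ours for illustration, not the paper's pipeline:

```python
# Minimal sketch: deriving a Roofline label from profiled counters and
# GPU peak specs. Names and numbers are illustrative, not from the paper.

def roofline_label(flops: float, bytes_moved: float,
                   peak_flops: float, peak_bw: float) -> str:
    """Classify a kernel as compute-bound or bandwidth-bound.

    flops       -- total floating-point operations executed by the kernel
    bytes_moved -- total bytes transferred to/from DRAM
    peak_flops  -- GPU peak compute throughput in FLOP/s
    peak_bw     -- GPU peak memory bandwidth in bytes/s
    """
    intensity = flops / bytes_moved   # arithmetic intensity (FLOP/byte)
    ridge = peak_flops / peak_bw      # ridge point of the Roofline
    return "compute-bound" if intensity >= ridge else "bandwidth-bound"

# Example with rough A100-class specs: ~19.5 TFLOP/s FP32, ~1.555 TB/s HBM.
# intensity = 50 FLOP/byte > ridge ~= 12.5 FLOP/byte -> "compute-bound"
print(roofline_label(flops=2.0e12, bytes_moved=4.0e10,
                     peak_flops=19.5e12, peak_bw=1.555e12))
```

In scenario (1) the LLM is given this kind of measured data directly, whereas in the zero- and few-shot scenarios it must infer the boundary from source code and hardware specifications alone.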