Do Large Language Models Understand Performance Optimization?

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models’ (LLMs) code performance optimization capabilities in high-performance computing (HPC) environments. We introduce the first HPC-oriented LLM benchmark, covering diverse computational motifs, and design an LLM-driven agent system that collaboratively performs optimization tasks on real HPC applications. Methodologically, we integrate HPC motif modeling with dual-dimensional evaluation—execution time and functional correctness—and contrast results against domain-specific tools (e.g., Intel VTune). Key contributions include: (1) the first systematic assessment of mainstream LLMs (e.g., o1, Claude-3.5, Llama-3.2) on HPC concept comprehension; (2) a scalable, agent-based collaborative optimization framework that transcends static benchmark limitations; and (3) empirical findings showing that while LLMs accurately interpret instructions and handle simple transformations, their error rate escalates under complex control/data flow, yielding only 12% of optimizations that are both functionally correct and performance-accelerating—substantially below traditional HPC tool efficacy.

📝 Abstract
Large Language Models (LLMs) have emerged as powerful tools for software development tasks such as code completion, translation, and optimization. However, their ability to generate efficient and correct code, particularly in complex High-Performance Computing (HPC) contexts, has remained underexplored. To address this gap, this paper presents a comprehensive benchmark suite encompassing multiple critical HPC computational motifs to evaluate the performance of code optimized by state-of-the-art LLMs, including OpenAI o1, Claude-3.5, and Llama-3.2. In addition to analyzing basic computational kernels, we developed an agent system that integrates LLMs to assess their effectiveness in real HPC applications. Our evaluation focused on key criteria such as execution time, correctness, and understanding of HPC-specific concepts. We also compared the results with those achieved using traditional HPC optimization tools. Based on the findings, we recognized the strengths of LLMs in understanding human instructions and performing automated code transformations. However, we also identified significant limitations, including their tendency to generate incorrect code and their challenges in comprehending complex control and data flows in sophisticated HPC code.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' ability to optimize HPC code.
Assess LLMs' performance in real HPC applications.
Compare LLMs with traditional HPC optimization tools.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed the first HPC-oriented benchmark suite for evaluating LLM code optimization.
Integrated LLMs into a collaborative agent system that optimizes real HPC applications.
Compared LLM optimization results against traditional HPC tools (e.g., Intel VTune).
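The evaluation described above scores each LLM-proposed optimization on two axes: functional correctness and execution time. The paper's actual benchmark harness is not reproduced here; the sketch below is a minimal, hypothetical illustration of such a dual-dimensional check (all function names, the toy kernels, and the timing scheme are assumptions, not the authors' code).

```python
import time

def evaluate_optimization(original, optimized, make_input, runs=5):
    """Score an 'optimized' kernel on two axes:
    functional correctness (output matches the original) and
    execution time (speedup = original time / optimized time)."""
    x = make_input()
    correct = original(x) == optimized(x)

    def best_time(fn):
        # Best-of-N timing to reduce noise from a single run.
        best = float("inf")
        for _ in range(runs):
            t0 = time.perf_counter()
            fn(x)
            best = min(best, time.perf_counter() - t0)
        return best

    speedup = best_time(original) / best_time(optimized)
    return correct, speedup

# Toy "optimization": a naive accumulation loop vs. a builtin-based rewrite,
# standing in for the kind of simple transformation LLMs handle well.
def naive_sum_sq(xs):
    total = 0
    for v in xs:
        total += v * v
    return total

def opt_sum_sq(xs):
    return sum(v * v for v in xs)

correct, speedup = evaluate_optimization(
    naive_sum_sq, opt_sum_sq, lambda: list(range(100_000))
)
```

An optimization counts as successful only when both conditions hold at once (`correct` is true and `speedup` exceeds 1.0), which is the bar only 12% of LLM transformations cleared in the paper's findings.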
Bowen Cui — George Mason University
Tejas Ramesh — George Mason University
Oscar Hernandez — Oak Ridge National Laboratory
Keren Zhou — George Mason University
Concurrent Programming · Distributed Systems · Parallel Programming · Machine Learning