🤖 AI Summary
This study systematically evaluates whether large language models (LLMs) can generate high-performance Java code for real-world optimization tasks. Method: Using a dataset of 65 industrial-grade performance bottlenecks, we run an automated patch-generation pipeline that combines two state-of-the-art LLMs with four prompt-engineering strategies, and rigorously compare the resulting patches against both baseline implementations and human expert solutions. Contribution/Results: We present the first large-scale, reproducible benchmark for LLM-assisted performance optimization. Empirical results show that roughly one-third of LLM-generated suggestions introduce novel optimization ideas; however, although most patches improve performance, their average speedup is significantly lower than that achieved by human experts (p < 0.01), and novelty does not guarantee substantive gains. The work establishes a foundational evaluation framework, exposes critical limitations of current LLMs in deep, system-level performance optimization, and highlights key directions for future improvement.
📝 Abstract
Large Language Models (LLMs) can generate code, but can they generate fast code? In this paper, we study this question using a dataset of 65 real-world tasks mined from open-source Java programs. We specifically select tasks where developers achieved significant speedups, and employ an automated pipeline to generate patches for these issues using two leading LLMs under four prompt variations. By rigorously benchmarking the results against the baseline and the human-authored solutions, we show that LLM-generated code does improve performance over the baseline in most cases. However, patches proposed by human developers outperform LLM fixes by a statistically significant margin, indicating that LLMs often fall short of finding truly optimal solutions. We further find that in approximately two-thirds of cases the LLM solutions are semantically identical or similar to the developer's optimization idea, whereas in the remaining one-third they propose a more original idea. Yet these original ideas only occasionally yield substantial performance gains.
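The core comparison described above, per-task speedups of LLM patches versus human patches, judged with a paired significance test, can be sketched as follows. This is a minimal illustration, not the paper's actual harness: the timing numbers are invented, and a simple exact two-sided sign test stands in for whatever statistical test the authors used.

```python
import math

# Hypothetical wall-clock times (ms) for the same tasks; purely illustrative.
baseline = [120.0, 85.0, 200.0, 60.0, 150.0, 95.0, 310.0, 42.0]
llm      = [ 90.0, 70.0, 160.0, 55.0, 100.0, 80.0, 250.0, 40.0]
human    = [ 60.0, 50.0, 110.0, 30.0,  75.0, 45.0, 150.0, 21.0]

def speedup(base, patched):
    """Per-task speedup of a patch relative to the baseline (>1 means faster)."""
    return [b / p for b, p in zip(base, patched)]

def sign_test_p(x, y):
    """Exact two-sided sign test: are the paired values in y consistently larger?"""
    wins = sum(1 for a, b in zip(x, y) if b > a)
    n = sum(1 for a, b in zip(x, y) if a != b)  # ties are discarded
    k = max(wins, n - wins)
    # Two-sided tail probability of >= k successes in n fair coin flips.
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

llm_speedup = speedup(baseline, llm)
human_speedup = speedup(baseline, human)
p = sign_test_p(llm_speedup, human_speedup)
print(f"median LLM speedup:   {sorted(llm_speedup)[len(llm_speedup) // 2]:.2f}x")
print(f"median human speedup: {sorted(human_speedup)[len(human_speedup) // 2]:.2f}x")
print(f"sign-test p-value:    {p:.4f}")
```

With these made-up numbers both patch sets beat the baseline, but the human patches win on every task, so the sign test reports a small p-value; the paper's conclusion has the same shape, drawn from 65 real tasks.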