An Experimental Study of Real-Life LLM-Proposed Performance Improvements

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the capability of large language models (LLMs) to generate high-performance Java code for real-world performance optimization tasks. Method: Leveraging 65 industrial-grade performance bottlenecks, we employ an automated patch generation pipeline integrating two state-of-the-art LLMs and four prompt engineering strategies to produce optimization patches, rigorously comparing them against baseline implementations and human expert solutions. Contribution/Results: We present the first large-scale, reproducible benchmark for LLM-assisted performance optimization. Empirical results reveal that roughly one-third of LLM-generated suggestions introduce novel optimization ideas. Most patches improve performance, but their average speedup is significantly lower than that achieved by human experts (p < 0.01), and novelty does not guarantee substantive gains. The work establishes a foundational evaluation framework and exposes critical limitations of current LLMs in deep, system-level performance optimization, highlighting key directions for future improvement.

📝 Abstract
Large Language Models (LLMs) can generate code, but can they generate fast code? In this paper, we study this question using a dataset of 65 real-world tasks mined from open-source Java programs. We specifically select tasks where developers achieved significant speedups, and employ an automated pipeline to generate patches for these issues using two leading LLMs under four prompt variations. By rigorously benchmarking the results against the baseline and human-authored solutions, we demonstrate that LLM-generated code indeed improves performance over the baseline in most cases. However, patches proposed by human developers outperform LLM fixes by a statistically significant margin, indicating that LLMs often fall short of finding truly optimal solutions. We further find that LLM solutions are semantically identical or similar to the developer optimization idea in approximately two-thirds of cases, whereas they propose a more original idea in the remaining one-third. Yet these original ideas only occasionally yield substantial performance gains.
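The core measurement in the abstract is a speedup comparison between a baseline and a patched version of the same task. A minimal sketch of such a comparison is shown below; the repeated best-of-N timing harness, the speedup definition, and the example "task" are illustrative assumptions, not the paper's actual benchmarking protocol (which targets Java programs).

```python
import time

def measure(fn, repeats=5):
    """Best-of-N wall-clock time for fn(); repeating reduces timer noise."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

def speedup(baseline_fn, patched_fn, repeats=5):
    """Speedup = baseline time / patched time; values > 1.0 mean the patch is faster."""
    return measure(baseline_fn, repeats) / measure(patched_fn, repeats)

# Hypothetical task: summing a list, with a slow baseline and a faster "patch".
data = list(range(200_000))

def baseline():
    total = 0
    for x in data:  # explicit interpreted loop: the performance bottleneck
        total += x
    return total

def patched():
    return sum(data)  # the built-in plays the role of an optimization patch

if __name__ == "__main__":
    print(f"speedup: {speedup(baseline, patched):.2f}x")
```

In the paper's setup, the same speedup would be computed once per task for each LLM patch and each developer patch, and the two resulting distributions compared with a statistical test.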
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to generate performance-optimized code patches
Comparing LLM-generated optimizations against human developer solutions
Assessing originality and effectiveness of LLM-proposed performance improvements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated patch generation using LLMs
Benchmarking LLM patches against human solutions
Analyzing semantic similarity of optimization ideas
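The last bullet, judging whether an LLM patch expresses the same optimization idea as the developer's, is presumably a qualitative step in the paper. As a crude automated proxy (purely an assumption, not the authors' method), one could compare patch texts with a token-level similarity ratio:

```python
import difflib

def patch_similarity(patch_a: str, patch_b: str) -> float:
    """Rough textual similarity in [0, 1] between two patch descriptions.

    Uses difflib's SequenceMatcher over word tokens. This is only a proxy:
    semantically identical optimization ideas can differ textually, which is
    why such classification is usually done by human inspection.
    """
    return difflib.SequenceMatcher(None, patch_a.split(), patch_b.split()).ratio()

# Hypothetical patch summaries for one task.
llm_patch = "replace the nested loop with a HashMap lookup"
dev_patch = "replace nested loop with a HashMap based lookup table"

if __name__ == "__main__":
    print(f"similarity: {patch_similarity(llm_patch, dev_patch):.2f}")
```

A threshold on this score could pre-sort patch pairs into "likely same idea" vs. "likely original idea" before manual review.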
Lirong Yi
Chalmers University of Technology and University of Gothenburg, Sweden
Gregory Gay
Chalmers University of Technology and University of Gothenburg
Software Testing, Search-Based Software Engineering, AI4SE, Automated Software Engineering
Philipp Leitner
Chalmers University of Technology and University of Gothenburg, Sweden