How Efficient is LLM-Generated Code? A Rigorous & High-Standard Benchmark

📅 2024-06-10
🏛️ arXiv.org
📈 Citations: 7
✨ Influential: 1
📄 PDF
🤖 AI Summary
Existing code generation benchmarks overemphasize functional correctness while neglecting execution efficiency. Method: We introduce ENAMEL, the first systematic benchmark for assessing the computational efficiency of large language model (LLM)-generated code. ENAMEL comprises (1) a curated suite of expert-level optimal algorithms and highly discriminative test cases; (2) an efficiency-aware metric, eff@k, coupled with a Rao–Blackwellized unbiased estimator that integrates right-censored runtime modeling and human-in-the-loop annotation; and (3) a human-expert-defined efficiency gold standard. Contribution/Results: An empirical evaluation across 30 mainstream LLMs reveals that current models consistently fail to produce expert-level efficient code; the primary bottlenecks are deficits in high-level algorithmic design and limited awareness of low-level implementation optimizations. ENAMEL establishes a new, reproducible evaluation dimension and benchmarking framework for code generation research.
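To make the right-censored runtime handling concrete, here is a minimal Python sketch of how a per-run efficiency score could be computed under a hard timeout. The formula and the names (`efficiency_score`, `ref_time`, `timeout`) are illustrative assumptions reconstructed from the summary, not the benchmark's actual API; the key idea is that a run killed at the timeout is censored (we only know its true runtime exceeds the limit), so it scores 0 rather than contributing a fabricated runtime.

```python
# Hypothetical per-test scoring sketch (assumed formula, not ENAMEL's exact one).
def efficiency_score(runtime: float, ref_time: float, timeout: float,
                     passed: bool) -> float:
    """Map a possibly right-censored runtime to a score in [0, 1].

    - Wrong output, or a run killed at the timeout (censored), scores 0.
    - A run matching the expert reference time scores 1.
    - Runtimes in between interpolate linearly.
    """
    if not passed or runtime >= timeout:  # incorrect or right-censored
        return 0.0
    # Clip to [0, 1]; code faster than the reference is capped at 1 here.
    return min(1.0, max(0.0, (timeout - runtime) / (timeout - ref_time)))
```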

๐Ÿ“ Abstract
The emergence of large language models (LLMs) has significantly pushed the frontiers of program synthesis. Advancement of LLM-based program synthesis calls for a thorough evaluation of LLM-generated code. Most evaluation frameworks focus on the (functional) correctness of generated code; efficiency, an important measure of code quality, has been overlooked in existing evaluations. In this work, we develop ENAMEL (EfficieNcy AutoMatic EvaLuator), a rigorous and high-standard benchmark for evaluating the capability of LLMs in generating efficient code. First, we propose a new efficiency metric called eff@k, which generalizes the pass@k metric from correctness to efficiency and appropriately handles right-censored execution time. We further derive an unbiased and variance-reduced estimator of eff@k via Rao–Blackwellization and provide a numerically stable implementation of the new estimator. Second, to set a high standard for efficiency evaluation, we employ a human expert to design the best algorithms and implementations as our reference solutions for efficiency, many of which are much more efficient than the existing canonical solutions in HumanEval and HumanEval+. Moreover, to ensure a rigorous evaluation, we employ a human expert to curate strong test case generators that filter out wrong code and differentiate suboptimal algorithms. An extensive study of 30 popular LLMs using our benchmark ENAMEL shows that LLMs still fall short of generating expert-level efficient code. Using two subsets of our problem set, we demonstrate that this deficiency arises because current LLMs struggle to design advanced algorithms and are barely aware of implementation optimization. Our benchmark is publicly available at https://github.com/q-rz/enamel.
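As a concrete illustration of eff@k, below is a minimal Python sketch of the Rao–Blackwellized estimator as the abstract describes it: eff@k is the expected maximum efficiency score among k generated samples, and averaging that maximum in closed form over all C(n, k) subsets of the n available samples removes the variance of random subsampling. Computing the combinatorial weights with a running product, rather than raw factorials, is one way to obtain the numerical stability the abstract mentions; the interface here is an assumption, not the benchmark's actual API.

```python
import numpy as np

def eff_at_k(scores, k):
    """Unbiased, variance-reduced estimate of eff@k from n sampled scores.

    With scores sorted in descending order, the rank-i sample is the
    maximum of a uniformly random size-k subset with probability
    C(n-i, k-1) / C(n, k); summing weight * score over ranks therefore
    averages the subset maximum over all C(n, k) subsets exactly.
    """
    e = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending order
    n = len(e)
    assert 1 <= k <= n, "need at least k samples"
    estimate = 0.0
    w = k / n  # weight of rank 1: C(n-1, k-1) / C(n, k) = k / n
    for i in range(n - k + 1):  # ranks 1 .. n-k+1 (0-indexed)
        estimate += w * e[i]
        if i < n - k:  # running-product update to the next rank's weight
            w *= (n - i - k) / (n - i - 1)
    return estimate

# Example: 4 samples of one problem, estimating eff@2.
print(eff_at_k([0.9, 0.0, 0.4, 0.7], k=2))  # 0.75
```

With binary scores (1 if a sample passes, 0 otherwise), the same formula reduces to the familiar unbiased pass@k estimator, which is the precise sense in which eff@k generalizes pass@k.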
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Code Efficiency
Algorithm Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

ENAMEL Tool
eff@k Evaluation Metric
High-performance Algorithm Benchmark