🤖 AI Summary
Compiler optimization reports are highly technical, difficult to interpret, and hard to act on. To address this, we propose CompilerGPT, the first end-to-end framework that integrates large language models (LLMs) directly into the compiler optimization feedback loop. CompilerGPT combines GPT-4o and Claude Sonnet with static analysis, structured prompt engineering, test-driven feedback, and multi-round iterative execution, enabling cross-compiler (Clang/GCC) report parsing and executable code rewriting. Its core contribution is a verifiable, automated optimization workflow that closes the loop from report comprehension to measurable performance improvement. Evaluated on five benchmark programs, CompilerGPT achieves runtime speedups of up to 6.5×, demonstrating the feasibility and effectiveness of LLM-driven automation for compiler optimization.
📝 Abstract
Current compiler optimization reports often present complex, technical information that is difficult for programmers to interpret and act upon effectively. This paper assesses the capability of large language models (LLMs) to understand compiler optimization reports and automatically rewrite the code accordingly. To this end, the paper introduces CompilerGPT, a novel framework that automates the interaction between compilers, LLMs, and a user-defined test and evaluation harness. CompilerGPT's workflow runs several iterations and reports on the obtained results. Experiments with two leading LLMs (GPT-4o and Claude Sonnet), optimization reports from two compilers (Clang and GCC), and five benchmark codes demonstrate the potential of this approach. Speedups of up to 6.5x were obtained, though not consistently in every test. This method holds promise for improving compiler usability and streamlining the software optimization process.
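The iterative workflow described above (collect an optimization report, ask an LLM for a rewrite, verify it with the test harness, and keep only changes that measurably improve runtime) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: every function name here is a hypothetical stand-in. A real version would invoke the compiler (e.g. `clang -Rpass-missed=...` or `gcc -fopt-info`), call an LLM API, and run the user-defined test and evaluation harness; here they are replaced by deterministic stubs so the loop structure is runnable on its own.

```python
# Hedged sketch of a CompilerGPT-style feedback loop.
# All helpers below are hypothetical stand-ins, not the paper's code.

def get_opt_report(source: str) -> str:
    """Stand-in for collecting a Clang/GCC optimization report."""
    return "remark: loop not vectorized"

def llm_rewrite(source: str, report: str) -> str:
    """Stand-in for prompting an LLM with the code and the report."""
    return source + "\n# hoisted invariant load"  # pretend rewrite

def run_tests(source: str) -> bool:
    """Stand-in for the user-defined correctness harness."""
    return True

def measure_runtime(source: str) -> float:
    """Toy cost model: each accepted rewrite reduces runtime."""
    return 10.0 / (1 + source.count("# hoisted"))

def optimize(source: str, rounds: int = 4):
    """Multi-round loop: only correct, faster candidates are kept."""
    best, best_time = source, measure_runtime(source)
    for _ in range(rounds):
        report = get_opt_report(best)          # 1. compile, collect report
        candidate = llm_rewrite(best, report)  # 2. LLM proposes a rewrite
        if not run_tests(candidate):           # 3. reject incorrect code
            continue
        t = measure_runtime(candidate)         # 4. keep only speedups
        if t < best_time:
            best, best_time = candidate, t
    return best, best_time

code, t = optimize("for i in range(n): a[i] = b[i] * c")
print(round(10.0 / t, 1))  # speedup over the baseline under the toy model
```

The key design point, reflected in step 3, is that the LLM's output is never trusted directly: the test harness gates every candidate, which is what makes the loop verifiable rather than merely suggestive.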