Language Models for Code Optimization: Survey, Challenges and Future Directions

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical bottlenecks, namely effectiveness, robustness, and trustworthiness, that hinder large language models (LLMs) in code performance optimization. The authors conduct the first systematic literature review (SLR) dedicated to LLM-powered code optimization, synthesizing over 50 primary studies. Through an empirical software engineering analysis organized around 11 specialized research questions, they identify five open challenges, including complexity-usability trade-offs, limited cross-scenario generalization, and insufficient user trust, and propose eight actionable future directions. The survey establishes a comprehensive landscape of LLM-based code optimization research, offering practical guidance and a research roadmap for both academia and industry.

📝 Abstract
Language models (LMs) built upon deep neural networks (DNNs) have recently demonstrated breakthrough effectiveness in software engineering tasks like code generation, code completion, and code repair. This has paved the way for the emergence of LM-based code optimization techniques, which are pivotal for enhancing the performance of existing programs, such as accelerating program execution time. However, a comprehensive survey dedicated to this specific application has been lacking. To address this gap, we present a systematic literature review of over 50 primary studies, identifying emerging trends and addressing 11 specialized questions. The results disclose five critical open challenges, such as balancing model complexity with practical usability, enhancing generalizability, and building trust in AI-powered solutions. Furthermore, we provide eight future research directions to facilitate more efficient, robust, and reliable LM-based code optimization. This study thereby seeks to provide actionable insights and foundational references for both researchers and practitioners in this rapidly evolving field.
Problem

Research questions and friction points this paper is trying to address.

Code Optimization
Language Models
AI Trust
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language Model
Code Optimization
Comprehensive Review