🤖 AI Summary
Large language models (LLMs) consume substantial energy during code generation, raising concerns about the sustainability of AI-assisted programming.
Method: This study systematically evaluates the energy efficiency and functional performance of open-source small language models (SLMs), namely StableCode-3B, StarCoderBase-3B, and Qwen2.5-Coder-3B-Instruct, against representative LLMs (GPT-4, DeepSeek-Reasoner) on a benchmark of 150 LeetCode problems. We quantify runtime, memory footprint, energy consumption (measured via hardware power monitoring), and functional correctness.
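Energy figures from hardware power monitoring are, in essence, power integrated over the run. A minimal sketch of that computation, assuming a hypothetical trace of `(timestamp_seconds, watts)` samples rather than any specific monitoring tool used in the study:

```python
def energy_joules(trace):
    """Integrate a power trace into energy (joules) via the trapezoidal rule.

    `trace` is a list of (timestamp_seconds, power_watts) samples; this
    format is a hypothetical stand-in for a hardware power monitor's log.
    """
    total = 0.0
    for (t0, p0), (t1, p1) in zip(trace, trace[1:]):
        total += (p0 + p1) / 2.0 * (t1 - t0)  # area of one trapezoid
    return total

# Example: a constant 50 W draw sampled over 2 seconds yields 100 J.
samples = [(0.0, 50.0), (1.0, 50.0), (2.0, 50.0)]
print(energy_joules(samples))  # → 100.0
```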
Contribution/Results: We find that SLMs generate functionally correct code while consuming no more energy than LLMs in over 52% of tasks, and substantially less in several cases. Moreover, for simple to moderately complex coding tasks, SLMs achieve comparable correctness rates at significantly reduced computational overhead. These results provide empirical support for energy-efficient, deployable alternatives to LLMs in practical code generation, advancing sustainable AI development.
📝 Abstract
Large Language Models (LLMs) are widely used for code generation. However, commercial models like ChatGPT require significant computing power, which leads to high energy use and carbon emissions, raising concerns about their environmental impact. In this study, we evaluate open-source Small Language Models (SLMs) trained explicitly for code generation and compare their performance and energy efficiency against LLMs and efficient human-written Python code. The goal is to investigate whether SLMs can match the performance of LLMs on certain types of programming problems while producing more energy-efficient code. We evaluate the models on 150 coding problems from LeetCode, evenly distributed across three difficulty levels: easy, medium, and hard. Our comparison includes three small open-source models, StableCode-3B, StarCoderBase-3B, and Qwen2.5-Coder-3B-Instruct, and two large commercial models, GPT-4 and DeepSeek-Reasoner. The generated code is evaluated using four key metrics: runtime, memory usage, energy consumption, and correctness. We use human-written solutions as a baseline to assess the quality and efficiency of the model-generated code. Results indicate that LLMs achieve the highest correctness across all difficulty levels, but SLMs are often more energy-efficient when their outputs are correct: in over 52% of the evaluated problems, SLMs consumed the same or less energy than LLMs.
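The per-problem evaluation described above (correctness, runtime, and memory for each generated solution) can be sketched with a small harness. This is an illustrative stand-in using only the Python standard library (`time`, `tracemalloc`), not the study's actual pipeline, and energy is omitted because it requires an external hardware power monitor:

```python
import time
import tracemalloc

def evaluate(solution, test_cases):
    """Run a candidate solution against (args, expected) pairs and report
    correctness, wall-clock runtime in seconds, and peak Python heap usage
    in bytes. Energy measurement would additionally require hardware
    power monitoring and is not modeled here."""
    tracemalloc.start()
    start = time.perf_counter()
    correct = all(solution(*args) == expected for args, expected in test_cases)
    runtime = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"correct": correct, "runtime_s": runtime, "peak_bytes": peak}

# Example with a trivial (hypothetical) model-generated function.
def add(a, b):
    return a + b

report = evaluate(add, [((1, 2), 3), ((5, 7), 12)])
print(report["correct"])  # → True
```

In the study's setting, `solution` would be the function extracted from a model's generated code and `test_cases` the LeetCode test suite for that problem.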