🤖 AI Summary
This study addresses the high energy consumption of large language models (LLMs) in software development, which hinders their sustainable deployment. Focusing on 3B–7B parameter LLMs, the authors conduct a phase-wise energy analysis—separating prefill and decoding stages—during code generation and understanding tasks. They reveal, for the first time, that energy expenditure in the prefill phase amplifies the per-token decoding cost, and identify redundant output behaviors (“babbling”) in certain models. Leveraging fine-grained, phase-level energy measurements alongside benchmark evaluations (HumanEval, LongBench), the work proposes targeted suppression strategies that achieve substantial energy savings of 44% to 89% without compromising task accuracy.
📝 Abstract
Context: AI-assisted tools are increasingly integrated into software development workflows, but their reliance on large language models (LLMs) introduces substantial computational and energy costs. Understanding and reducing the energy footprint of LLM inference is therefore essential for sustainable software development.

Objective: In this study, we conduct a phase-level analysis of LLM inference energy consumption, distinguishing between (1) the prefill phase, where the model processes the input and builds internal representations, and (2) the decoding phase, where output tokens are generated using the stored state.

Method: We investigate six 6B–7B and four 3B–4B transformer-based models, evaluating them on the code-centric benchmarks HumanEval (code generation) and LongBench (code understanding).

Results: Our findings show that, within both parameter groups, models exhibit distinct energy patterns across the two phases. Furthermore, we observe that increases in prefill cost amplify the energy cost per token during decoding, with amplification ranging from 1.3% to 51.8% depending on the model. Lastly, three out of ten models exhibit babbling behavior, appending excessive content to the output that unnecessarily inflates energy consumption. We implemented babbling suppression for code generation, achieving energy savings of 44% to 89% without affecting generation accuracy.

Conclusion: These findings show that prefill costs influence decoding, which dominates energy consumption, and that babbling suppression can yield up to 89% energy savings. Reducing inference energy therefore requires both mitigating babbling behavior and limiting the impact of prefill on decoding.
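The abstract describes phase-level energy measurement, attributing joules separately to prefill and decoding. The paper's instrumentation is not detailed here; a minimal sketch of the general technique is to sample device power during each phase and integrate over its duration. The `read_power_w` callable below is an assumption standing in for a real power source (on NVIDIA GPUs this would typically be NVML's `nvmlDeviceGetPowerUsage`, which reports milliwatts):

```python
import threading
import time


def measure_phase_energy_j(run_phase, read_power_w, interval_s=0.01):
    """Estimate the energy (in joules) consumed while run_phase() executes.

    A background thread samples instantaneous power (watts) from the
    injectable read_power_w callable at a fixed interval; energy is then
    approximated as average power multiplied by the phase's wall-clock
    duration. run_phase would be, e.g., the prefill forward pass or the
    decoding loop (hypothetical placeholders, not the paper's API).
    """
    samples = []
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(read_power_w())
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    thread.start()
    start = time.perf_counter()
    run_phase()
    elapsed = time.perf_counter() - start
    stop.set()
    thread.join()

    avg_w = sum(samples) / len(samples) if samples else 0.0
    return avg_w * elapsed
```

Because the power reader is injected, the same helper covers prefill and decoding by wrapping each phase in its own call, which is all that phase-level attribution requires.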
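The reported 44%–89% savings come from suppressing babbling, i.e., cutting off redundant output after the requested code is complete. The abstract does not specify the suppression mechanism; a common approach in HumanEval-style completion harnesses is stop-sequence truncation, sketched below (the stop markers are a conventional choice, not necessarily the paper's):

```python
# Hedged sketch: stop-sequence truncation for HumanEval-style completions.
# Any text after the first new top-level construct is treated as babbling;
# halting decoding at the same point is what saves the energy.
STOP_SEQUENCES = ["\ndef ", "\nclass ", "\nif __name__", "\nprint("]


def suppress_babbling(completion: str, stops=STOP_SEQUENCES) -> str:
    """Truncate a model completion at the earliest stop marker.

    For a function-body completion task, anything after the body (a new
    definition, a demo print, a __main__ guard) is redundant output.
    """
    cut = len(completion)
    for stop in stops:
        idx = completion.find(stop)
        if idx != -1:
            cut = min(cut, idx + 1)  # keep the trailing newline of the body
    return completion[:cut]
```

In a live decoding loop the same stop strings would be checked against the growing output so generation terminates early, rather than trimming text after the fact.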