How to use model architecture and training environment to estimate the energy consumption of DL training

📅 2023-07-07
📈 Citations: 6
Influential: 0
🤖 AI Summary
Existing deep learning training energy estimation methods rely on unvalidated assumptions, resulting in high estimation errors. Method: The paper empirically uncovers the co-dependent energy consumption patterns between model architectures and hardware environments through multi-dimensional temporal monitoring (power, FLOPs, compute throughput, accuracy, etc.), regression modeling, energy-efficiency trade-off analysis, and cross-platform experiments. Contribution/Results: The authors propose four interpretable, high-accuracy energy estimation algorithms that reduce average error by 50%, and for the first time establish a quantitative relationship among architecture, hardware environment, energy consumption, and model correctness. They find that GPU selection should dynamically match model computational complexity to maximize energy efficiency; joint optimization achieves an 80.72% energy reduction with <0.1% accuracy degradation. These findings invalidate the reliability assumptions underpinning mainstream estimation approaches, and the authors open-source a lightweight energy prediction toolchain.
📝 Abstract
To raise awareness of the huge impact Deep Learning (DL) has on the environment, several works have tried to estimate the energy consumption and carbon footprint of DL-based systems across their life cycle. However, the estimations for energy consumption in the training stage usually rely on assumptions that have not been thoroughly tested. This study aims to move past these assumptions by leveraging the relationship between energy consumption and two relevant design decisions in DL training: model architecture and training environment. To investigate these relationships, we collect multiple metrics related to energy efficiency and model correctness during the models' training. Then, we outline the trade-offs between the measured energy consumption and the models' correctness regarding model architecture, and their relationship with the training environment. Finally, we study the training's power consumption behavior and propose four new energy estimation methods. Our results show that selecting the proper model architecture and training environment can reduce energy consumption dramatically (up to 80.72%) at the cost of negligible decreases in correctness. Also, we find evidence that GPUs should scale with the models' computational complexity for better energy efficiency. Furthermore, we prove that current energy estimation methods are unreliable and propose alternatives that are 2x more precise.
Problem

Research questions and friction points this paper is trying to address.

Estimating deep learning energy consumption accurately using model architecture and training environment
Analyzing trade-offs between energy efficiency and model accuracy across different configurations
Addressing limitations of common estimation practices like FLOPs or GPU TDP
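One common practice the paper critiques is estimating training energy from the GPU's rated Thermal Design Power (TDP) multiplied by wall-clock time, which assumes the card draws its full rated power for the entire run. A minimal sketch of that naive estimate (function name and values are illustrative, not from the paper):

```python
def tdp_energy_kwh(tdp_watts: float, training_hours: float) -> float:
    """Naive estimate: energy = rated power (TDP) x wall-clock time.

    This treats TDP as the constant power draw, which in practice
    overstates energy whenever the GPU is underutilized.
    """
    return tdp_watts * training_hours / 1000.0  # Wh -> kWh

# Example: a 300 W GPU training for 10 hours
print(tdp_energy_kwh(300.0, 10.0))  # 3.0 kWh upper bound
```

Because real power draw fluctuates well below TDP during data loading and low-utilization phases, this figure is an upper bound rather than a measurement, which is the unreliability the paper sets out to quantify.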
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigating model architecture and training environment energy effects
Proposing STEP method for stable training epoch projection
Developing PRE method for pre-training regression-based estimation
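The STEP idea, projecting total energy from epochs measured after power draw has stabilized, can be sketched roughly as follows. This is a hedged illustration of the concept only; the function name, signature, and warm-up handling are assumptions, not the paper's implementation:

```python
def step_estimate(epoch_energies_wh: list[float],
                  total_epochs: int,
                  warmup: int = 1) -> float:
    """Project total training energy from a few measured epochs.

    Epochs before `warmup` keep their measured cost (power draw is
    still settling); the mean of the stable epochs is projected
    over the remaining, unmeasured epochs.
    """
    stable = epoch_energies_wh[warmup:]
    mean_stable = sum(stable) / len(stable)
    measured_warmup = sum(epoch_energies_wh[:warmup])
    return measured_warmup + mean_stable * (total_epochs - warmup)

# Example: first epoch costs 120 Wh while caches warm up,
# then epochs stabilize near 100 Wh; project to 10 epochs.
print(step_estimate([120.0, 100.0, 100.0], total_epochs=10))  # 1020.0 Wh
```

The PRE method, by contrast, fits a regression on architecture and environment features (e.g. FLOPs, parameter count, GPU model) so energy can be estimated before training starts.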
Santiago del Rey
Universitat Politècnica de Catalunya, Barcelona, Spain
Silverio Martínez-Fernández
Universitat Politècnica de Catalunya, Barcelona, Spain
Luís Cruz
Delft University of Technology, Delft, Netherlands
Xavier Franch
Universitat Politècnica de Catalunya, Barcelona, Spain