🤖 AI Summary
Existing CPU-based DVFS latency models exhibit large errors on GPUs and fail to capture the coupled frequency-voltage-latency relationship, which hinders energy-efficient DNN inference optimization. Method: We propose a GPU-specific, DVFS-aware inference latency model, driven by empirical measurements on multiple devices and combining modular DNN decomposition with joint frequency-voltage-latency curve fitting. The model enables fine-grained, block-level latency estimation and is validated against measured inference times. Building on it, we design two optimization schemes, local and cooperative, that use the model for hardware-aware scheduling. Contribution/Results: Local optimization achieves at least a 66% reduction in inference latency and at least 69% energy savings; cooperative optimization improves task partitioning and consistently outperforms the CPU-DVFS baseline. Our work addresses the poor transferability of CPU-oriented DVFS models to GPUs and provides a validated latency model and a practical basis for GPU-accelerated DNN energy-latency co-optimization.
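To illustrate the joint frequency-latency curve fitting mentioned above, the sketch below fits a simple model of the form T(f) = a/f + b to hypothetical block-latency measurements. The functional form, the data, and the use of scipy.optimize.curve_fit are illustrative assumptions only; the paper's model additionally accounts for the coupled voltage term.

```python
# Minimal sketch of DVFS-aware latency curve fitting on synthetic data.
# The form T(f) = a/f + b and all numbers are illustrative assumptions,
# not the paper's exact model or measurements.
import numpy as np
from scipy.optimize import curve_fit

def latency_model(freq_mhz, a, b):
    # a: frequency-scaled compute/memory term, b: frequency-independent overhead
    return a / freq_mhz + b

# Hypothetical measurements: GPU core frequency (MHz) vs. block latency (ms)
freqs = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0])
lat_ms = np.array([41.8, 22.1, 15.6, 12.3, 10.5])

(a, b), _ = curve_fit(latency_model, freqs, lat_ms)
print(f"fitted coefficients: a = {a:.1f} MHz*ms, b = {b:.2f} ms")
print(f"predicted latency @ 1050 MHz: {latency_model(1050.0, a, b):.2f} ms")
```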
📝 Abstract
The rapid development of deep neural networks (DNNs) comes with high computational costs. To tackle this challenge, dynamic voltage and frequency scaling (DVFS) has emerged as a promising technique for balancing the latency and energy consumption of DNN inference by adjusting the operating frequency of processors. However, most existing models of DNN inference time are built on the CPU-DVFS technique, and directly applying a CPU-DVFS model to DNN inference on GPUs leads to significant errors when optimizing latency and energy consumption. In this paper, we propose a DVFS-aware latency model that precisely characterizes DNN inference time on GPUs. We first formulate the DNN inference time based on extensive measurements on different devices and analyze the impact of the fitting parameters. We then verify the proposed model by dividing DNNs into multiple blocks and comparing the predicted and measured inference times. Finally, we compare our model with the CPU-DVFS model in two specific cases. Evaluation results demonstrate that local inference optimization with our model reduces inference time and energy consumption by no less than 66% and 69%, respectively. In addition, cooperative inference with our model improves the partition policy and reduces energy consumption compared with the CPU-DVFS model.
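To make the local optimization case concrete, the following sketch picks, from a set of hypothetical GPU frequency-voltage operating points, the one that minimizes estimated energy while meeting an inference deadline. The latency coefficients, the dynamic-power model P = k·V²·f, and every constant are assumptions for demonstration, not values or results from the paper.

```python
# Illustrative sketch of deadline-constrained energy minimization under DVFS.
# All operating points, coefficients, and the power model are hypothetical.

OPERATING_POINTS = [  # hypothetical (frequency in MHz, core voltage in V) pairs
    (300, 0.65), (600, 0.75), (900, 0.85), (1200, 0.95), (1500, 1.05),
]
K_DYN = 0.02  # hypothetical effective-capacitance constant, W / (MHz * V^2)

def latency_ms(freq_mhz: float, a: float = 12000.0, b: float = 2.0) -> float:
    """DVFS-aware latency estimate T(f) = a/f + b (example coefficients)."""
    return a / freq_mhz + b

def energy_mj(freq_mhz: float, volt: float) -> float:
    """Energy = dynamic power * latency (W * ms = mJ)."""
    power_w = K_DYN * volt ** 2 * freq_mhz
    return power_w * latency_ms(freq_mhz)

def best_operating_point(deadline_ms: float):
    """Lowest-energy operating point whose predicted latency meets the deadline."""
    feasible = [p for p in OPERATING_POINTS if latency_ms(p[0]) <= deadline_ms]
    return min(feasible, key=lambda p: energy_mj(*p)) if feasible else None

print(best_operating_point(deadline_ms=20.0))  # -> (900, 0.85) with these numbers
```

With these example numbers the lowest feasible frequency is also the most energy-efficient, which is the kind of trade-off the proposed model is meant to expose; cooperative inference would additionally split the blocks across devices before choosing operating points.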