🤖 AI Summary
Existing approaches struggle to predict the execution latency of deep neural networks accurately and in a way that generalizes across diverse data types and GPU hardware. This work proposes a kernel-aware latency modeling method that leverages the GPU's SIMT architecture to characterize computational behavior and memory access patterns at a fine granularity, enabling differentiated modeling of complex kernels such as Triton, Flash Attention, and Cutlass Attention. By moving beyond the conventional reliance on deep learning-based predictors or handcrafted heuristics, the proposed approach achieves prediction errors below 10% for Transformer models across both FP32 and BF16 data types and multiple hardware platforms. It substantially outperforms NeuSight, by 10-20% under FP32 and by over 50% under BF16, and keeps errors to only 3-8% on complex attention kernels.
📝 Abstract
We present PM2Lat, a fast and generalized framework for accurately predicting the latency of deep neural network models on GPUs, with a special focus on NVIDIA hardware. Unlike prior methods that rely on deep learning models or handcrafted heuristics, PM2Lat leverages the Single-Instruction-Multiple-Thread (SIMT) architecture of GPUs to model the execution time of DNN models. We first build fine-grained models of GPU operations by studying their computational behavior and memory access patterns. In doing so, we find that different GPU kernels exhibit significant performance disparities even when they serve the same purpose. The core idea of PM2Lat is therefore to differentiate kernels based on their configurations and analyze each accordingly. This kernel-aware modeling enables PM2Lat to achieve consistently low prediction error across diverse data types and hardware platforms. In addition, PM2Lat generalizes beyond standard matrix multiplication to support complex GPU kernels such as Triton, Flash Attention, and Cutlass Attention. Experimental results show that PM2Lat consistently achieves error rates below 10% across different data types and hardware platforms on Transformer models, outperforming the state-of-the-art NeuSight by 10-20% for FP32 and by at least 50% for BF16. When applied to diverse kernels, the error rate remains at 3-8%.
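The abstract does not spell out PM2Lat's analytical model, but the general idea of estimating a kernel's latency from its computational and memory-access characteristics can be illustrated with a simple roofline-style sketch. Everything below is a hypothetical illustration, not the paper's actual model: the `GPUSpec` fields, the `gemm_latency_ms` helper, and the hardware numbers are all assumptions chosen for the example.

```python
# Hypothetical sketch of analytical latency estimation for a GEMM kernel.
# A roofline model takes the maximum of compute time and memory time;
# a kernel-aware predictor like PM2Lat would go further and pick a
# model matched to the specific kernel configuration.
from dataclasses import dataclass

@dataclass
class GPUSpec:
    peak_tflops: float   # peak compute throughput for the dtype (TFLOP/s)
    mem_bw_gbs: float    # peak DRAM bandwidth (GB/s)

def gemm_latency_ms(m: int, n: int, k: int,
                    bytes_per_elem: int, spec: GPUSpec) -> float:
    """Roofline-style latency estimate for an (m x k) @ (k x n) GEMM."""
    flops = 2 * m * n * k                               # multiply-adds
    traffic = bytes_per_elem * (m * k + k * n + m * n)  # A, B, C once each
    t_compute = flops / (spec.peak_tflops * 1e12)       # seconds
    t_memory = traffic / (spec.mem_bw_gbs * 1e9)        # seconds
    return max(t_compute, t_memory) * 1e3               # milliseconds

# Illustrative A100-like BF16 numbers (assumed, not measured).
spec = GPUSpec(peak_tflops=312.0, mem_bw_gbs=1555.0)
print(f"{gemm_latency_ms(4096, 4096, 4096, 2, spec):.3f} ms")
```

A large square GEMM comes out compute-bound under this model, while a small one is memory-bound; distinguishing such regimes per kernel (and per data type, since `peak_tflops` differs between FP32 and BF16) is the kind of differentiation a kernel-aware predictor needs, though PM2Lat's actual formulation is more fine-grained than this sketch.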