🤖 AI Summary
Deep learning training frequently fails due to GPU out-of-memory (OOM) errors, yet existing OOM prediction methods suffer from critical limitations: static computational graph analysis cannot capture dynamic execution behavior, while GPU-side sampling exacerbates resource contention. To address this, we propose VeritasEst, the first CPU-only dynamic GPU memory consumption estimator that requires no access to the target GPU. VeritasEst models execution traces of the computational graph, performs fine-grained, operator-level memory lifetime analysis, and infers tensor reuse patterns to enable highly accurate offline peak memory estimation prior to scheduling. Evaluated on CNN models across 1,000 experiments, it reduces relative error by 84% and the OOM prediction failure rate by 73%, demonstrating strong robustness and cross-model generalizability. VeritasEst establishes a novel paradigm for GPU memory estimation: zero resource contention, high accuracy, and direct deployability in production schedulers.
📝 Abstract
The growing adoption of Deep Learning (DL) places significant pressure on GPU resources, particularly within GPU clusters, where Out-Of-Memory (OOM) errors are a primary impediment to model training and efficient resource utilization. Conventional OOM estimation techniques, which rely either on static graph analysis or on direct GPU memory profiling, suffer from inherent limitations: static analysis often fails to capture model dynamics, whereas GPU-based profiling intensifies contention for scarce GPU resources. To overcome these constraints, we present VeritasEst, an entirely CPU-based analysis tool that accurately predicts the peak GPU memory required for DL training tasks without accessing the target GPU. This "offline" prediction capability is the core advantage of VeritasEst: accurate memory footprint information becomes available before task scheduling, effectively preventing OOM errors and optimizing GPU allocation. Its performance was validated through thousands of experimental runs across convolutional neural network (CNN) models: compared to baseline GPU memory estimators, VeritasEst reduces the relative error by 84% and lowers the estimation failure probability by 73%. VeritasEst represents a key step towards efficient and predictable DL training in resource-constrained environments.
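The operator-level memory lifetime analysis described above can be illustrated with a minimal sketch: given a recorded execution trace, each tensor is characterized by its size and the operator steps at which it is allocated and last used, and peak memory follows from a single sweep over the trace. All names here (`TensorLifetime`, `estimate_peak_memory`) are hypothetical illustrations under these assumptions, not VeritasEst's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TensorLifetime:
    size_bytes: int   # tensor footprint on the GPU
    alloc_step: int   # operator index that produces the tensor
    free_step: int    # operator index after which it can be freed

def estimate_peak_memory(lifetimes, num_steps):
    """Sweep the execution trace and track live tensor bytes per step.

    Builds a per-step delta array (bytes allocated minus bytes freed
    entering each step), then accumulates it to find the peak.
    """
    delta = [0] * (num_steps + 1)
    for t in lifetimes:
        delta[t.alloc_step] += t.size_bytes
        delta[t.free_step + 1] -= t.size_bytes
    live = peak = 0
    for step in range(num_steps):
        live += delta[step]
        peak = max(peak, live)
    return peak

# Toy trace: an activation kept for one step overlapping a larger buffer.
trace = [
    TensorLifetime(size_bytes=100, alloc_step=0, free_step=1),
    TensorLifetime(size_bytes=200, alloc_step=1, free_step=2),
]
print(estimate_peak_memory(trace, num_steps=3))  # both live at step 1 -> 300
```

A real estimator would additionally account for allocator fragmentation and framework-managed caching, which this interval sweep deliberately ignores.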