🤖 AI Summary
To address severe weight staleness, low GPU-memory efficiency, and slow convergence in multi-GPU training, this paper proposes two interconnected pipeline-parallelism frameworks: V-TiMePReSt and I-TiMePReSt. V-TiMePReSt enforces strict synchronization to achieve zero staleness without weight stashing, reducing GPU-memory consumption at the cost of slower convergence. I-TiMePReSt retains weight stashing and introduces an intermediate-weight interpolation strategy, which mathematically models and dynamically quantifies the gradient contribution of stale weights during backpropagation, balancing convergence speed and accuracy without increasing memory overhead. Experiments demonstrate that V-TiMePReSt significantly reduces staleness and improves GPU-memory utilization, while I-TiMePReSt achieves 18–32% faster convergence than baseline methods under comparable memory consumption, with negligible accuracy degradation (<0.3%).
📝 Abstract
The high resource requirements of Deep Neural Network (DNN) training across multiple GPUs necessitate the development of various parallelism techniques. In this paper, we introduce two interconnected DNN training frameworks, namely V-TiMePReSt and I-TiMePReSt, based on pipeline parallelism, a variant of model parallelism. V-TiMePReSt is a completely staleness-free system which enables DNNs to be trained on the latest updated weights in each stage of all forward and backward passes. Developing staleness-aware systems at the expense of weight stashing reduces GPU-memory consumption; however, it increases the number of epochs required to converge. Thus, we introduce I-TiMePReSt, which is also a staleness-aware system, but not at the expense of weight stashing. It relies neither solely on the stale weights nor solely on the latest updated weights; instead, it computes an intermediate weight between the two, biased toward the latter, and performs the backward pass on it. Additionally, we mathematically formulate the significance of the stale weights as a function of the degree of staleness. In contrast to V-TiMePReSt, I-TiMePReSt works on the assumption that stale weights make a significant contribution to training, which can be quantified mathematically from the degree of staleness, although other contributory factors should not be ignored. Experimental results show that V-TiMePReSt is advantageous over existing models in terms of $(1)$ the extent of staleness of the weight parameter values and $(2)$ GPU-memory efficiency, while I-TiMePReSt is superior in terms of $(1)$ removing staleness of the weight parameters without removing weight stashing and $(2)$ maintaining the trade-off between GPU-memory consumption and convergence speed (number of epochs).
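The abstract does not give the exact interpolation formula, but the idea of computing an intermediate weight between the stale and latest weights, with the stale weights' contribution scaled by the degree of staleness, can be sketched as below. The coefficient `alpha` and the linear mixing rule are assumptions for illustration, not the paper's actual formulation.

```python
def intermediate_weight(w_stale, w_latest, staleness, max_staleness):
    """Illustrative sketch of staleness-aware weight interpolation.

    Hypothetical mixing rule: the more stale the stashed weights are,
    the more the intermediate weight is pulled toward the latest
    updated weights before the backward pass is performed on it.
    """
    # Normalized degree of staleness in [0, 1] (an assumed quantification).
    alpha = staleness / max_staleness
    # Linear interpolation between stale and latest weights, per parameter.
    return [alpha * wl + (1.0 - alpha) * ws
            for ws, wl in zip(w_stale, w_latest)]
```

For example, with staleness halfway to the maximum, the intermediate weight lands midway between the stale and latest values; with zero staleness it coincides with the stale (unchanged) weights, and at maximum staleness it coincides with the latest weights.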