🤖 AI Summary
In large-model training, activation recomputation typically executes on demand on the critical path and is not overlapped with communication, leading to high critical-path latency and imbalance across memory, compute, and communication. To address this, we propose Lynx—a framework that introduces (1) a fine-grained execution mechanism enabling precise overlap of recomputation with communication; (2) a heuristic scheduling algorithm that exploits structural similarity within the model to jointly optimize recomputation timing and communication phases; and (3) a recomputation-aware model partitioning strategy that balances GPU memory constraints with load distribution across pipeline stages. Experiments on GPT models ranging from 1.3B to 23B parameters demonstrate that Lynx achieves up to 1.37× higher training throughput than state-of-the-art recomputation schemes, significantly alleviates GPU memory bottlenecks, and improves overall system resource utilization.
📝 Abstract
Large model training often uses recomputation to alleviate memory pressure and pipelining to exploit parallelism across data, tensors, and devices. However, existing recomputation approaches may incur high overhead when training real-world models, as they are executed on demand on the critical training path. In this paper, we present Lynx, a new recomputation framework that reduces this overhead by overlapping recomputation with communication in training pipelines. To shrink the large search space of recomputation strategies, we propose a heuristic-based recomputation scheduling algorithm, built on the observation that large DNN models contain identical structures, so the same scheduling policy can be applied to all of them. Additionally, we propose a recomputation-aware model partitioning method that balances each stage's execution time for improved training throughput. Our comprehensive evaluation using GPT models with 1.3B-23B parameters shows that Lynx outperforms existing recomputation approaches by up to 1.37x.
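The core intuition can be sketched with a toy critical-path model. The sketch below is illustrative only (not Lynx's actual implementation): per-layer recomputation, communication, and backward times are hypothetical numbers, and the point is simply that hiding recomputation inside the communication window shortens the critical path, while identical transformer layers let one schedule be reused everywhere.

```python
# Toy model of a pipeline stage's backward critical path.
# Each layer is a (recompute, communication, backward) time tuple (ms).

def critical_path_on_demand(layers):
    # On-demand recomputation: recompute and communication both block
    # the backward pass, so all three times add up serially.
    return sum(recomp + comm + backward for recomp, comm, backward in layers)

def critical_path_overlapped(layers):
    # Overlapped schedule: recomputation is issued during the
    # communication window, so only the longer of the two remains
    # on the critical path.
    return sum(max(recomp, comm) + backward for recomp, comm, backward in layers)

# Identical transformer layers mean one per-layer schedule can be
# reused for every layer -- the heuristic search-space reduction.
layers = [(2.0, 3.0, 5.0)] * 4  # hypothetical timings

print(critical_path_on_demand(layers))   # 40.0
print(critical_path_overlapped(layers))  # 32.0
```

With these made-up numbers the overlapped schedule yields a 40/32 = 1.25x shorter critical path; the paper's heuristic searches for such overlaps per layer and reuses the result across the model's repeated structures.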