🤖 AI Summary
To address the scalability bottlenecks caused by sparse embedding tables in large-scale recommendation models (memory blow-up, heavy inter-device communication, and load imbalance), this paper proposes two-dimensional sparse parallelism: a hybrid architecture that, instead of fully sharding every table across all GPUs, layers data parallelism (replication across device groups) on top of model parallelism (sharding within each group). A momentum-scaled row-wise AdaGrad optimizer compensates for the accuracy impact of this shift in training paradigm, and the all-to-all communication pattern for embedding lookups is optimized to reduce cross-device latency. Evaluated on a cluster of up to 4096 GPUs, the approach achieves near-linear weak scaling (92.5% efficiency), improves training throughput by 3.1×, reduces peak memory footprint by 37%, and preserves model accuracy, setting a new state of the art for distributed training of recommendation systems.
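The two parallelism dimensions can be illustrated with a small placement sketch (the function and its layout are illustrative assumptions, not the paper's actual API): GPUs are split into replica groups, every group holds a full copy of each table, and within a group a table's rows are sharded across the group's members, so lookup all-to-alls stay inside one group.

```python
# Sketch of 2D sparse placement: data parallelism across replica groups,
# model parallelism (row sharding) inside each group. Names are hypothetical.

def two_d_placement(num_gpus: int, group_size: int, num_rows: int):
    """Return {gpu_id: (row_start, row_end)} for one embedding table."""
    assert num_gpus % group_size == 0, "replica groups must tile the cluster"
    rows_per_shard = -(-num_rows // group_size)  # ceil division
    placement = {}
    for gpu in range(num_gpus):
        rank_in_group = gpu % group_size  # position inside a replica group
        start = rank_in_group * rows_per_shard
        end = min(start + rows_per_shard, num_rows)
        placement[gpu] = (start, end)
    return placement

# 8 GPUs in groups of 4: two replica groups, each sharding 1000 rows 4 ways,
# so each all-to-all spans a 4-GPU group instead of the full cluster.
plan = two_d_placement(num_gpus=8, group_size=4, num_rows=1000)
```

Gradients for the replicated shards are then synchronized across groups, which is the data-parallel dimension the abstract adds on top of model parallelism.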
📝 Abstract
The increasing complexity of deep learning recommendation models (DLRMs) has created a growing need for large-scale distributed systems that can train efficiently on vast amounts of data. In DLRMs, sparse embedding tables are the crucial component for handling sparse categorical features. In industrial DLRMs, these tables typically contain trillions of parameters, necessitating model parallelism to cope with memory constraints. However, as training systems scale to massive numbers of GPUs, the traditional strategy of fully sharding embedding tables across all devices poses significant scalability challenges, including load imbalance and straggler issues, intensive lookup communication, and heavy embedding-activation memory. To overcome these limitations, we propose a novel two-dimensional sparse parallelism approach. Rather than fully sharding tables across all GPUs, our solution introduces data parallelism on top of model parallelism. This enables efficient all-to-all communication and reduces peak memory consumption. Additionally, we develop a momentum-scaled row-wise AdaGrad algorithm to mitigate the performance losses associated with this shift in training paradigm. Our extensive experiments demonstrate that the proposed approach significantly enhances training efficiency while maintaining model performance parity. It achieves nearly linear training-speed scaling up to 4K GPUs, setting a new state-of-the-art benchmark for recommendation model training.
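As a rough sketch of what a momentum-scaled row-wise AdaGrad update could look like (the abstract does not give the formula, so the exact momentum scaling below is an assumption): row-wise AdaGrad keeps a single accumulator scalar per embedding row rather than one per element, and the momentum buffer is folded into the sparse per-row update.

```python
import math

def momentum_scaled_rowwise_adagrad_step(table, rows, grads, accum, mom,
                                         lr=0.05, beta=0.9, eps=1e-8):
    """One sparse optimizer step over the rows touched in this batch.

    table: list of rows (each a list of floats)   accum: one scalar per row
    rows:  touched row indices                    mom:   momentum buffer with
    grads: gradient rows aligned with `rows`             the same shape as table
    """
    for r, g in zip(rows, grads):
        # Row-wise AdaGrad keeps a single accumulator per row (mean squared
        # gradient), so this optimizer state is O(rows), not O(parameters).
        accum[r] += sum(x * x for x in g) / len(g)
        step = lr / (math.sqrt(accum[r]) + eps)
        # Assumed momentum form: decay the buffer, add the raw gradient,
        # then scale the whole row update by the row's adaptive step size.
        mom[r] = [beta * m + x for m, x in zip(mom[r], g)]
        table[r] = [w - step * m for w, m in zip(table[r], mom[r])]
```

Only rows that appear in the batch are touched, which is what makes the update sparse; untouched rows pay no compute, communication, or state-update cost.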