Ethereal: Divide and Conquer Network Load Balancing in Large-Scale Distributed Training

📅 2024-06-30
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the network communication bottleneck in large-scale distributed training over CLOS topologies, challenging the industry consensus that packet spraying is essential for high-throughput collective communication. The authors propose a single-path RDMA transmission optimization: leveraging the deterministic flow characteristics of collective operations, flows are dynamically split and routed across CLOS multipaths at the application layer, backed by a topology-aware distributed load-balancing algorithm and a lightweight protocol extension that is fully compatible with existing RDMA NICs. The paper provides the first theoretical argument that single-path transmission can asymptotically approach the performance of ideal packet spraying. Astra-Sim evaluations show that the approach reduces collective completion time by up to 30% versus packet spraying and by up to 40% versus REPS, while remaining robust under link failures.

📝 Abstract
Large-scale distributed training in production datacenters constitutes a challenging workload bottlenecked by network communication. In response, both major industry players (e.g., the Ultra Ethernet Consortium) and parts of academia have surprisingly, and almost unanimously, agreed that packet spraying is necessary to improve the performance of large-scale distributed training workloads. In this paper, we challenge this prevailing belief and pose the question: how close can single-path transport come to matching the performance of packet spraying? We demonstrate that single-path transport (from a NIC's perspective) is sufficient and can perform nearly as well as ideal packet spraying, particularly in the context of distributed training over CLOS-based topologies. Our assertion is based on four key observations about workloads driven by collective communication patterns: (i) flow sizes are known upon arrival, (ii) flow sizes are equal within each step of a collective, (iii) the completion time of a collective matters more than individual flow completion times, and (iv) flows can be split upon arrival to control load balancing directly from the application layer. We present Ethereal, a simple distributed load-balancing algorithm that opportunistically splits flows and assigns a path to each flow transparently, requiring little to no changes to existing RDMA NICs. Our evaluation, spanning a wide range of collective communication algorithms and GPT models in Astra-Sim, shows that Ethereal significantly reduces completion times by up to 30% compared to packet spraying and by up to 40% compared to REPS, even under link failures. This paper offers an alternative perspective for developing next-generation transport protocols tailored to large-scale distributed training.
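The core idea in the abstract is that, because collective flow sizes are known on arrival and equal within a step, load balancing can be done at the application layer by splitting a flow into sub-flows and pinning each to a lightly loaded uplink, leaving each sub-flow single-path from the NIC's perspective. A minimal greedy sketch of that idea is shown below; it is not the paper's actual algorithm, and the function name, `max_splits` parameter, and byte-count load metric are illustrative assumptions.

```python
# Hedged sketch of application-layer flow splitting (not the Ethereal
# implementation): split a known-size flow across the currently
# least-loaded uplink paths. Each resulting sub-flow uses one path.
import heapq

def assign_subflows(flow_size, num_paths, path_load, max_splits=2):
    """Greedily split a flow across the least-loaded uplinks.

    path_load: mutable list of bytes currently scheduled per uplink.
    Returns a list of (path_index, num_bytes) assignments and updates
    path_load in place.
    """
    splits = min(max_splits, num_paths)
    chunk = flow_size // splits
    # Min-heap of (load, path index): always pick the least-loaded path.
    heap = [(path_load[i], i) for i in range(num_paths)]
    heapq.heapify(heap)
    assignments = []
    remaining = flow_size
    for s in range(splits):
        load, idx = heapq.heappop(heap)
        # Last sub-flow absorbs any rounding remainder.
        size = chunk if s < splits - 1 else remaining
        assignments.append((idx, size))
        remaining -= size
        path_load[idx] = load + size
        heapq.heappush(heap, (path_load[idx], idx))
    return assignments
```

Because every flow in a collective step has the same size, repeating this greedy assignment across flows keeps the per-uplink load nearly uniform, which is the intuition behind why single-path sub-flows can approach ideal packet spraying here.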
Problem

Research questions and friction points this paper is trying to address.

Challenge the consensus that packet spraying is necessary for distributed training
Quantify how closely single-path transport can match ideal packet spraying
Design a distributed load-balancing scheme (Ethereal) compatible with existing RDMA NICs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-path transport shown to nearly match ideal packet spraying
Application-layer flow splitting exploits known, equal flow sizes for load balancing
Ethereal reduces collective completion times by up to 30-40% in Astra-Sim