🤖 AI Summary
To address the high communication overhead of the Lion optimizer in distributed deep learning over Ethernet, where communication becomes a training bottleneck, this paper proposes a lightweight communication-optimization framework. Methodologically, it first identifies the sign-dominant property of Lion's update vectors and designs a sign-aware quantization strategy; second, it introduces selective momentum synchronization to eliminate full-momentum transmission without compromising convergence; third, it establishes a coordinated compression-communication paradigm integrating a streamlined AllReduce algorithm with majority-voting decoding. Experiments on Ethernet-based clusters demonstrate end-to-end training speedups of up to 5x, an 80% reduction in communication volume, and convergence behavior indistinguishable from full-precision Lion. The framework thus achieves substantial communication-efficiency gains while preserving optimization fidelity and scalability.
📝 Abstract
Communication overhead is a key challenge in distributed deep learning, especially on slower Ethernet interconnects, and given current hardware trends, communication is likely to become a major bottleneck. While gradient compression techniques have been explored for SGD and Adam, the Lion optimizer has the distinct advantage that its update vectors are the output of a sign operation, enabling straightforward quantization. However, simply compressing updates for communication and using techniques like majority voting fails to yield end-to-end speedups, due to inefficient communication algorithms and degraded convergence. We analyze three factors critical to distributed learning with Lion: optimizing communication methods, identifying effective quantization methods, and assessing the necessity of momentum synchronization. Our findings show that quantization techniques adapted to Lion, combined with selective momentum synchronization, can significantly reduce communication costs while maintaining convergence. We combine these into Lion Cub, which enables up to 5x speedups in end-to-end training compared to Lion. This highlights Lion's potential as a communication-efficient solution for distributed training.
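The core idea, that Lion's ±1 updates admit 1-bit compression decoded by majority vote across workers, can be illustrated with a minimal sketch. The function names and the tie-breaking rule are illustrative assumptions, and the paper's actual streamlined AllReduce is more involved than this direct gather-and-vote:

```python
import numpy as np

def compress_signs(update):
    """Pack a ±1 Lion update vector into one bit per element (hypothetical helper)."""
    return np.packbits(update > 0)

def majority_vote(packed_updates, n):
    """Unpack each worker's sign bits and take the elementwise majority.

    Ties (possible with an even worker count) are broken toward +1 here;
    the paper's decoding rule may differ.
    """
    votes = sum(np.unpackbits(p)[:n].astype(np.int32) * 2 - 1
                for p in packed_updates)
    return np.where(votes >= 0, 1, -1)

# Three workers, four parameters: each sends 1 bit per parameter
# instead of a 32-bit float, a 32x reduction before voting overhead.
workers = [np.array(u) for u in ([1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1])]
packed = [compress_signs(u) for u in workers]
print(majority_vote(packed, 4))  # [ 1 -1 -1 -1]
```

With `w` workers and `d` parameters, each worker transmits `d` bits rather than `32d`, which is where the bulk of the reported communication-volume reduction would come from under this scheme.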