TAGC: Optimizing Gradient Communication in Distributed Transformer Training

📅 2025-03-30
🏛️ Proceedings of the 5th Workshop on Machine Learning and Systems
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high gradient synchronization overhead in distributed training of large Transformer models—particularly under Fully Sharded Data Parallel (FSDP)—which limits scalability, this paper proposes a structure-aware gradient compression method. The approach features: (1) a layer-selective compression mechanism that models inter-layer gradient importance to enable adaptive sparsification; and (2) lossless homomorphic compression tailored to FSDP's sharding architecture, enabling compression prior to cross-GPU gradient aggregation. Integrated into PyTorch's FSDP framework, the method incurs negligible accuracy degradation (<0.3% loss in perplexity or task-specific metrics) across multiple LLM training benchmarks, while delivering up to a 15% end-to-end training speedup. The implementation is open-sourced.

📝 Abstract
The increasing complexity of large language models (LLMs) necessitates efficient training strategies to mitigate the high computational costs associated with distributed training. A significant bottleneck in this process is gradient synchronization across multiple GPUs, particularly in the zero-redundancy parallelism mode. In this paper, we introduce Transformer-Aware Gradient Compression (TAGC), an optimized gradient compression algorithm designed specifically for transformer-based models. TAGC extends the lossless homomorphic compression method by adapting it for sharded models and incorporating transformer-specific optimizations, such as layer-selective compression and dynamic sparsification. Our experimental results demonstrate that TAGC accelerates training by up to 15% compared to the standard Fully Sharded Data Parallel (FSDP) approach, with minimal impact on model quality. We integrate TAGC into the PyTorch FSDP framework; the implementation is publicly available at https://github.com/ipolyakov/TAGC.
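To make the core idea concrete, here is a minimal NumPy sketch of magnitude-based top-k gradient sparsification applied before aggregation, in the spirit of the dynamic sparsification the abstract describes. The function names (`topk_sparsify`, `aggregate`) and the fixed sparsity ratio are illustrative assumptions, not TAGC's actual API or its layer-selective policy.

```python
import numpy as np

def topk_sparsify(grad, ratio):
    """Illustrative sparsifier: keep the fraction `ratio` of entries with the
    largest magnitudes and zero out the rest before communication."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-|value| entries (unordered, which is fine here).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

def aggregate(shards):
    """Sum per-worker sparsified gradients, mimicking an all-reduce that can
    operate on the compressed representation (the homomorphic property)."""
    return np.sum(shards, axis=0)

# Hypothetical usage: two workers sparsify local gradients, then aggregate.
g0 = np.array([5.0, -0.1, 0.2, -4.0, 0.05, 3.0])
g1 = np.array([0.1, 6.0, -0.3, 2.0, -5.0, 0.2])
agg = aggregate([topk_sparsify(g0, 0.5), topk_sparsify(g1, 0.5)])
```

In a real FSDP integration this logic would live in a communication hook so that only the compressed gradients cross the interconnect; the sketch above only shows the compress-then-aggregate ordering.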
Problem

Research questions and friction points this paper is trying to address.

Optimizing gradient synchronization in distributed transformer training
Reducing computational costs for large language models
Enhancing efficiency of gradient compression for sharded models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-Aware Gradient Compression (TAGC)
Layer-selective compression for transformers
Dynamic sparsification in gradient communication
Igor Polyakov
VK, ITMO University, Russia
Alexey Dukhanov
ITMO University, Russia
Egor Spirin
Raiffeisen
Deep Learning · Distributed Training · NLP · Voice