Research on Model Parallelism and Data Parallelism Optimization Methods in Large Language Model-Based Recommendation Systems

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computation and communication bottlenecks in distributed training of large language model (LLM)-based recommender systems, this paper proposes a hybrid parallel architecture integrating tensor parallelism, pipeline parallelism, and asynchronous data parallelism. It introduces an adaptive load-balancing mechanism and an efficient sparse gradient aggregation communication framework supporting both gradient compression and sparsification, and the design ensures scalability and robustness under online deployment. Experiments on real-world recommendation datasets demonstrate that the proposed approach achieves over 30% higher training throughput and approximately 20% higher GPU resource utilization than baseline methods. These results validate its scalability and training stability, marking a significant advance in efficient large-scale training of LLM-based recommenders.

📝 Abstract
With the rapid adoption of large language models (LLMs) in recommendation systems, the computational and communication bottlenecks caused by their massive parameter sizes and large data volumes have become increasingly prominent. This paper systematically investigates two classes of optimization methods, model parallelism and data parallelism, for distributed training of LLMs in recommendation scenarios. For model parallelism, we implement both tensor parallelism and pipeline parallelism, and introduce an adaptive load-balancing mechanism to reduce cross-device communication overhead. For data parallelism, we compare synchronous and asynchronous modes, combining gradient compression and sparsification techniques with an efficient aggregation communication framework to significantly improve bandwidth utilization. Experiments conducted on a real-world recommendation dataset in a simulated service environment demonstrate that our proposed hybrid parallelism scheme increases training throughput by over 30% and improves resource utilization by approximately 20% compared to traditional single-mode parallelism, while maintaining strong scalability and robustness. Finally, we discuss trade-offs among different parallel strategies in online deployment and outline future directions involving heterogeneous hardware integration and automated scheduling technologies.
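The tensor parallelism the abstract describes can be illustrated with a minimal, dependency-free sketch (not the paper's implementation): a linear layer's weight matrix is sharded along its output dimension, each "device" computes its slice of the output, and the slices are concatenated, mimicking an all-gather. All names here (`matvec`, `shard_rows`) are illustrative, not from the paper.

```python
def matvec(W, x):
    """Dense matrix-vector product y = W x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def shard_rows(W, num_devices):
    """Split the weight matrix along the output dimension (one shard per device)."""
    per = len(W) // num_devices
    return [W[d * per:(d + 1) * per] for d in range(num_devices)]

W = [[1, 0], [0, 2], [3, 0], [0, 4]]  # 4x2 weight matrix of a toy linear layer
x = [1.0, 2.0]

y = []
for shard in shard_rows(W, 2):  # each shard is computed by a different "device"
    y.extend(matvec(shard, x))  # concatenating slices plays the role of all-gather

# y == [1.0, 4.0, 3.0, 8.0], identical to the unsharded matvec(W, x)
```

In a real system each shard lives on a separate GPU and the concatenation is a collective communication step; the sketch only shows that the sharded computation reproduces the unsharded result.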
Problem

Research questions and friction points this paper is trying to address.

Optimize model and data parallelism for LLM-based recommendation systems
Reduce communication overhead in distributed LLM training
Improve training throughput and resource utilization efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid model-data parallelism for LLM training
Adaptive load-balancing reduces communication overhead
Gradient compression boosts bandwidth utilization
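The gradient compression and sparsification idea above can be sketched as follows; this is an illustrative top-k scheme under our own assumptions, not the authors' communication framework. Each worker transmits only its k largest-magnitude gradient entries as (index, value) pairs, and the aggregator averages the sparse messages into a dense update.

```python
def sparsify_topk(grad, k):
    """Keep the k largest-magnitude entries; return them as {index: value}."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

def aggregate_sparse(worker_grads, dim):
    """Average sparse gradients from all workers into one dense vector."""
    dense = [0.0] * dim
    for sparse in worker_grads:
        for i, v in sparse.items():
            dense[i] += v
    n = len(worker_grads)
    return [v / n for v in dense]

# Two workers each send only their top-2 of 4 gradient entries,
# halving the communicated volume in this toy setting.
g1 = sparsify_topk([0.9, -0.1, 0.05, -0.8], k=2)  # keeps indices 0 and 3
g2 = sparsify_topk([0.7, 0.2, -0.95, 0.1], k=2)   # keeps indices 2 and 0
avg = aggregate_sparse([g1, g2], dim=4)
```

Production systems typically add error feedback (accumulating the dropped residual locally) so the compression stays unbiased over time; that detail is omitted here for brevity.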
Haowei Yang
Cullen College of Engineering, University of Houston, Houston, TX, USA
Zhao Wang
School of Computer Science and Big Data, Fuzhou University, Fuzhou, Fujian, China
Yu Tian
Khoury College of Computer Sciences, Northeastern University, Seattle, WA, USA
Chengrui Zhou
Columbia University
Zhongheng Yang
Khoury College of Computer Sciences, Northeastern University, Jersey City, NJ, USA
Dannier Li
School of Computing, University of Nebraska-Lincoln, Lincoln, NE, USA