A Survey on Memory-Efficient Large-Scale Model Training in AI for Science

📅 2025-01-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address GPU memory bottlenecks hindering large-scale AI models (e.g., LLMs, AlphaFold 2) in scientific domains—including biology, medicine, chemistry, and meteorology—this work presents a systematic survey and architectural reconstruction of Transformer memory optimization techniques tailored for scientific computing. We introduce, for the first time, a *domain-customized optimization paradigm*, integrating distributed training, mixed-precision arithmetic, gradient checkpointing, model parallelism, and sequence chunking. Furthermore, we propose lossless memory compression schemes—such as structure-aware compression specifically designed for AlphaFold 2’s architecture. Experimental evaluation demonstrates up to 70% reduction in GPU memory consumption while rigorously preserving accuracy on scientific prediction tasks. Our framework establishes a scalable, high-fidelity infrastructure for efficiently training scientific foundation models.

Technology Category

Application Category

📝 Abstract
Scientific research faces high costs and inefficiencies with traditional methods, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous expansion of model size has led to significant memory demands, hindering further development and application of LLMs for science. To address this, we review memory-efficient training techniques for LLMs based on the transformer architecture, including distributed training, mixed precision training, and gradient checkpointing. Using AlphaFold 2 as an example, we demonstrate how tailored memory optimization methods can reduce storage needs while preserving prediction accuracy. We also discuss the challenges of memory optimization in practice and potential future directions, hoping to provide valuable insights for researchers and engineers.
Problem

Research questions and friction points this paper is trying to address.

Memory-efficient
Large Language Models (LLM)
Training Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory Efficiency
Distributed Computing
Reduced Precision Computation
🔎 Similar Papers
No similar papers found.
K
Kaiyuan Tian
Linbo Qiao
Linbo Qiao
NUDT
Stochastic OptimizationDistributed OptimizationLarge-scale Machine Learning
B
Baihui Liu
G
Gongqingjian Jiang
D
Dongsheng Li