🤖 AI Summary
To address the GPU memory bottlenecks that hinder large-scale AI models (e.g., LLMs, AlphaFold 2) in scientific domains such as biology, medicine, chemistry, and meteorology, this work presents a systematic survey and architectural analysis of Transformer memory-optimization techniques tailored to scientific computing. It introduces a *domain-customized optimization paradigm* that integrates distributed training, mixed-precision arithmetic, gradient checkpointing, model parallelism, and sequence chunking, and proposes lossless memory-compression schemes, including structure-aware compression designed for AlphaFold 2's architecture. Experimental evaluation demonstrates up to a 70% reduction in GPU memory consumption while preserving accuracy on scientific prediction tasks, establishing a scalable, high-fidelity infrastructure for efficiently training scientific foundation models.
📝 Abstract
Traditional methods in scientific research are often costly and inefficient, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous growth of model size has led to significant memory demands, hindering the further development and application of LLMs for science. To address this, we review memory-efficient training techniques for Transformer-based LLMs, including distributed training, mixed-precision training, and gradient checkpointing. Using AlphaFold 2 as an example, we demonstrate how tailored memory-optimization methods can reduce storage needs while preserving prediction accuracy. We also discuss the practical challenges of memory optimization and potential future directions, hoping to provide valuable insights for researchers and engineers.
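Among the techniques the survey reviews, gradient checkpointing is perhaps the easiest to illustrate: instead of storing every intermediate activation for the backward pass, only a sparse subset of "checkpoint" activations is kept, and the rest are recomputed segment by segment during backpropagation. The sketch below is a minimal pure-Python illustration of this idea with toy scalar layers; `layer`, `dlayer`, and the `seg` parameter are hypothetical stand-ins, not part of any framework's API (real systems use e.g. `torch.utils.checkpoint` on full Transformer blocks).

```python
import math

# Toy scalar "layer" and its input derivative: stand-ins for a
# Transformer block's forward and backward computation.
def layer(x, w):
    return math.tanh(w * x)

def dlayer(x, w):
    t = math.tanh(w * x)
    return w * (1.0 - t * t)

def grad_full(x0, weights):
    """Ordinary backprop: keeps every activation (O(n) memory)."""
    acts = [x0]
    for w in weights:
        acts.append(layer(acts[-1], w))
    g = 1.0  # d(output)/d(output)
    for i in range(len(weights) - 1, -1, -1):
        g *= dlayer(acts[i], weights[i])  # chain rule, layer by layer
    return acts[-1], g

def grad_checkpointed(x0, weights, seg=4):
    """Checkpointing: keep one activation per segment, recompute the rest."""
    n = len(weights)
    ckpts = {0: x0}
    x = x0
    for i, w in enumerate(weights):
        x = layer(x, w)
        if (i + 1) % seg == 0 and i + 1 < n:
            ckpts[i + 1] = x  # only ~n/seg activations are stored
    out, g = x, 1.0
    end = n
    while end > 0:
        start = (end - 1) // seg * seg
        # Extra forward pass: rebuild this segment's activations
        # from its stored checkpoint.
        acts = [ckpts[start]]
        for i in range(start, end - 1):
            acts.append(layer(acts[-1], weights[i]))
        for i in range(end - 1, start - 1, -1):
            g *= dlayer(acts[i - start], weights[i])
        end = start
    return out, g
```

Both routines return the same output and gradient; the checkpointed version stores roughly `n/seg` activations instead of `n`, at the cost of one additional forward pass per segment. This is the memory-for-compute trade that makes training very deep scientific models feasible on fixed GPU memory.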