FastCHGNet: Training one Universal Interatomic Potential to 1.5 Hours with 32 GPUs

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitively long training time (8.3 days) and excessive GPU memory consumption of the CHGNet model—which hinder its large-scale deployment for high-accuracy interatomic potential prediction—this work introduces three core optimizations: (1) a novel force-and-stress decoupled readout module to enhance physical consistency and gradient propagation efficiency; (2) GPU kernel fusion and redundant computation bypass strategies to reduce operator scheduling overhead and memory access latency; and (3) a multi-GPU load-balanced distributed training framework integrating memory-aware data parallelism and second-order derivative optimization. Evaluated on a 32×A100 cluster, our approach reduces training time to 1.53 hours (≈130× speedup) and cuts GPU memory usage by 3.59×, while strictly preserving *ab initio*-level prediction accuracy across all target properties.

📝 Abstract
Graph neural network universal interatomic potentials (GNN-UIPs) have demonstrated remarkable generalization and transfer capabilities in material discovery and property prediction. These models can accelerate molecular dynamics (MD) simulation by several orders of magnitude while maintaining *ab initio* accuracy, making them a promising new paradigm in material simulations. One notable example is the Crystal Hamiltonian Graph Neural Network (CHGNet), pretrained on the energies, forces, stresses, and magnetic moments from the MPtrj dataset, representing a state-of-the-art GNN-UIP model for charge-informed MD simulations. However, training the CHGNet model is time-consuming (8.3 days on one A100 GPU) for three reasons: (i) it requires multi-layer propagation to reach information from more distant atoms, (ii) it requires second-order derivative calculations to complete weight updates, and (iii) the reference CHGNet implementation does not fully leverage the available computational capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three contributions: first, we design innovative Force/Stress Readout modules to decompose force/stress prediction; second, we adopt extensive optimizations such as kernel fusion and redundancy bypass to exploit GPU computing power fully; finally, we extend CHGNet to support multiple GPUs and propose a load-balancing technique to enhance GPU utilization. Numerical results show that FastCHGNet reduces the memory footprint by a factor of 3.59 and decreases the training time to **1.53 hours** on 32 GPUs without sacrificing model accuracy.
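The abstract's point (ii) — that training requires second-order derivatives — can be illustrated with a minimal sketch. This is *not* the CHGNet architecture: it uses a hypothetical toy MLP energy model and dummy force labels purely to show the mechanism. Forces are the (negative) first derivative of the predicted energy with respect to atomic positions, so backpropagating a force loss to the weights differentiates through that derivative, which is what `create_graph=True` enables in PyTorch:

```python
import torch

# Toy stand-in for an energy model: maps per-atom 3D positions to a
# scalar total energy. (The real CHGNet is a message-passing GNN.)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

pos = torch.randn(8, 3, requires_grad=True)   # 8 atoms, xyz positions
energy = model(pos).sum()                     # predicted total energy E

# First derivative: predicted forces F = -dE/dx. create_graph=True keeps
# the autograd graph of this gradient so it can itself be differentiated.
forces = -torch.autograd.grad(energy, pos, create_graph=True)[0]

# Force loss against (dummy) reference labels; backward() then computes
# d(loss)/d(weights), a mixed second derivative through dE/dx.
target_forces = torch.zeros_like(forces)
loss = torch.nn.functional.mse_loss(forces, target_forces)
loss.backward()
```

Every training step pays for this double differentiation, which is part of why the reference implementation is slow and memory-hungry, and why FastCHGNet's decomposed readout and fused kernels target exactly this path.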
Problem

Research questions and friction points this paper is trying to address.

Crystal Hamiltonian Graph Neural Network
training efficiency
prediction speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

FastCHGNet
Multi-GPU Parallel Processing
Efficient Memory Utilization
Yuanchang Zhou
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Siyu Hu
Institute of Computing Technology, Chinese Academy of Sciences
AI4SHPC
Chen Wang
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Lin-Wang Wang
Institute of Semiconductors, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Guangming Tan
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Weile Jia
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences