🤖 AI Summary
To address the prohibitively long training time (8.3 days on a single A100 GPU) and the excessive GPU memory consumption of the CHGNet model, which together hinder its large-scale deployment for high-accuracy interatomic potential prediction, this work introduces three core optimizations: (1) novel decoupled Force/Stress readout modules that decompose force and stress prediction, improving physical consistency and gradient-propagation efficiency; (2) GPU kernel fusion and redundant-computation bypass to reduce operator-scheduling overhead and memory-access latency; and (3) a load-balanced, memory-aware multi-GPU data-parallel training framework. Evaluated on 32 A100 GPUs, the approach reduces training time to 1.53 hours (a roughly 130× speedup) and the memory footprint by a factor of 3.59, while preserving *ab initio*-level prediction accuracy across all target properties.
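The readout decoupling matters because standard GNN potentials obtain forces as the gradient F = -∂E/∂x, so training against force labels means differentiating that gradient with respect to the weights, i.e. the second-order derivatives the abstract cites as a cost. Below is a minimal PyTorch sketch contrasting the two routes; all names (`forces_via_autograd`, `DirectForceHead`, `energy_model`) are our own illustration, not FastCHGNet's actual API:

```python
import torch

def forces_via_autograd(energy_model, positions):
    """Conventional GNN-UIP route: F = -dE/dx.

    Because the force loss depends on this gradient, backpropagation
    must compute d/dtheta (dE/dx), i.e. second-order derivatives,
    which inflates both compute time and memory.
    """
    positions = positions.requires_grad_(True)
    energy = energy_model(positions).sum()
    # create_graph=True keeps the graph so the force loss is differentiable
    (grad,) = torch.autograd.grad(energy, positions, create_graph=True)
    return -grad

class DirectForceHead(torch.nn.Module):
    """Hypothetical decoupled readout: map per-atom features to a force
    vector in one forward pass, so the force loss needs only first-order
    gradients with respect to the weights."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, feat_dim),
            torch.nn.SiLU(),
            torch.nn.Linear(feat_dim, 3),
        )

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # (n_atoms, feat_dim) -> (n_atoms, 3) predicted forces
        return self.mlp(node_feats)
```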
📝 Abstract
Graph neural network universal interatomic potentials (GNN-UIPs) have demonstrated remarkable generalization and transfer capabilities in material discovery and property prediction. These models can accelerate molecular dynamics (MD) simulation by several orders of magnitude while maintaining *ab initio* accuracy, making them a promising new paradigm in material simulations. One notable example is the Crystal Hamiltonian Graph Neural Network (CHGNet), pretrained on the energies, forces, stresses, and magnetic moments of the MPtrj dataset, a state-of-the-art GNN-UIP model for charge-informed MD simulations. However, training the CHGNet model is time-consuming (8.3 days on one A100 GPU) for three reasons: (i) it requires multi-layer message passing to propagate information from distant atoms, (ii) it requires computing second-order derivatives to update the weights, and (iii) the reference CHGNet implementation does not fully exploit the GPU's computational capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three contributions. First, we design innovative Force/Stress Readout modules to decompose force/stress prediction. Second, we apply extensive optimizations, such as kernel fusion and redundancy bypass, to exploit GPU computing power fully. Finally, we extend CHGNet to support multiple GPUs and propose a load-balancing technique to improve GPU utilization. Numerical results show that FastCHGNet reduces the memory footprint by a factor of 3.59 and cuts training time to **1.53 hours** on 32 GPUs without sacrificing model accuracy.
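To make the load-balancing contribution concrete (the specific heuristic below is our assumption, not necessarily the paper's algorithm): the per-step cost of a crystal-graph batch scales roughly with its atom and edge counts, so naive sharding leaves some GPUs idle while others process large structures. A simple greedy longest-processing-time assignment keeps per-rank workloads comparable:

```python
import heapq

def balance_across_ranks(sizes: list[int], n_ranks: int) -> list[list[int]]:
    """Greedy longest-processing-time assignment of structures to GPUs.

    sizes: per-structure cost estimates (e.g. atom counts).
    Returns one list of structure indices per rank, with roughly
    equal total cost on each rank.
    """
    heap = [(0, rank) for rank in range(n_ranks)]  # (current load, rank)
    heapq.heapify(heap)
    buckets: list[list[int]] = [[] for _ in range(n_ranks)]
    # Place the largest structures first, always onto the least-loaded rank.
    for idx in sorted(range(len(sizes)), key=lambda i: -sizes[i]):
        load, rank = heapq.heappop(heap)
        buckets[rank].append(idx)
        heapq.heappush(heap, (load + sizes[idx], rank))
    return buckets

# Example: balance_across_ranks([12, 96, 40, 40, 8, 64], n_ranks=2)
# yields totals of 136 vs. 124 atoms instead of an arbitrary split.
```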