🤖 AI Summary
To address energy drift and the loss of equipartition among degrees of freedom caused by large time steps in molecular dynamics (MD) simulations, this work proposes a machine-learning integrator grounded in variational principles. The method uses neural networks to learn the system's action functional directly, constructing a data-driven, symplectic, and time-reversible map that intrinsically respects the Hamiltonian structure. Unlike black-box state predictors, the framework formulates numerical integration as a differentiable action-minimization process and incorporates an iterative correction mechanism to enhance long-term stability. Validation across multiple molecular systems shows that the integrator supports time steps 5–10× larger than conventional Verlet integration, reduces energy error by one to two orders of magnitude, and preserves energy equipartition across all degrees of freedom, significantly improving both the accuracy and the efficiency of nanosecond- to microsecond-scale MD simulations.
📝 Abstract
The equations of classical mechanics can be used to model the time evolution of countless physical systems, from the astrophysical to the atomic scale. Accurate numerical integration requires small time steps, which limits the computational efficiency -- especially in cases such as molecular dynamics that span wildly different time scales. Using machine-learning (ML) algorithms to predict trajectories allows one to greatly extend the integration time step, at the cost of introducing artifacts such as lack of energy conservation and loss of equipartition between different degrees of freedom of a system. We propose learning data-driven structure-preserving (symplectic and time-reversible) maps to generate long-time-step classical dynamics, showing that this method is equivalent to learning the mechanical action of the system of interest. We show that an action-derived ML integrator eliminates the pathological behavior of non-structure-preserving ML predictors, and that the method can be applied iteratively, serving as a correction to computationally cheaper direct predictors.
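The equivalence between learning a symplectic map and learning the mechanical action can be illustrated on a toy system. The sketch below uses the harmonic oscillator, whose discrete action (generating function) S(q0, q1) is known in closed form; in the paper's framework S would instead be a neural network fitted to data. The point is that any map defined implicitly by p0 = -∂S/∂q0 and p1 = +∂S/∂q1 is symplectic and time-reversible by construction, so it conserves energy over long trajectories even at time steps where naive explicit schemes blow up. The closed-form S and the specific time step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# For H = (p^2 + q^2)/2 the exact discrete action over a step h is
#   S(q0, q1) = ((q0^2 + q1^2) * cos(h) - 2*q0*q1) / (2*sin(h)).
# The map it generates via p0 = -dS/dq0, p1 = +dS/dq1 is symplectic
# by construction, regardless of how S was obtained (here analytic;
# in the learned setting, a neural network).

h = 1.2  # a large step, well beyond explicit Euler's stability limit

def step(q0, p0):
    # Solve p0 = -dS/dq0 = (q1 - q0*cos(h)) / sin(h) for q1.
    # Linear here; a learned S would generally need a Newton solve.
    q1 = q0 * np.cos(h) + p0 * np.sin(h)
    # Then p1 = dS/dq1 = (q1*cos(h) - q0) / sin(h).
    p1 = (q1 * np.cos(h) - q0) / np.sin(h)
    return q1, p1

q, p = 1.0, 0.0
e0 = 0.5 * (q**2 + p**2)
for _ in range(10_000):
    q, p = step(q, p)
drift = abs(0.5 * (q**2 + p**2) - e0)
print(f"energy drift after 10k large steps: {drift:.2e}")
```

Because the update is generated by a single scalar function S, energy shows no secular drift: only floating-point round-off accumulates, which is the structure-preserving behavior the paper exploits at long time steps.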