🤖 AI Summary
This work addresses the long-term numerical instability of data-driven models for non-canonical Hamiltonian systems. The core challenge is that conventional numerical integrators rely on the canonical symplectic structure, rendering learned models unstable when applied to non-canonical settings. To resolve this, the authors propose a structure-preserving learning framework with two complementary training strategies, direct vector-field learning and discrete-time evolution-map learning, unifying model learning and numerical integration under a common geometric constraint. By integrating neural networks with degenerate variational integrators and addressing the gauge dependence that destabilizes the scheme, the method ensures dynamical consistency and numerical stability. Evaluated on strongly nonlinear physical systems, including guiding-center dynamics, the approach significantly improves long-term simulation accuracy and stability, overcoming the structural limitations of the prevailing "model-then-discretize" paradigm.
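To make the first strategy concrete, below is a minimal JAX sketch of direct vector-field learning, under stated assumptions: a learned one-form theta induces the non-canonical two-form via omega_ij = d_i theta_j - d_j theta_i, that matrix is invertible at the sampled points, and training data consist of (z, z_dot) pairs. All names here (`mlp`, `vector_field`, `vf_loss`) are hypothetical; the paper's actual architecture and constraints may differ.

```python
# Hypothetical sketch: direct vector-field learning for a non-canonical
# Hamiltonian system. A network theta (one-form) and a network H
# (Hamiltonian) define the vector field z_dot = omega(z)^{-1} grad H(z),
# with omega_ij = d_i theta_j - d_j theta_i. Not the paper's exact method.
import jax
import jax.numpy as jnp

def mlp(params, z):
    # plain tanh MLP; params is a list of (W, b) pairs
    for W, b in params[:-1]:
        z = jnp.tanh(W @ z + b)
    W, b = params[-1]
    return W @ z + b

def init_mlp(key, sizes):
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, k_w, k_b = jax.random.split(key, 3)
        params.append((jax.random.normal(k_w, (dout, din)) / jnp.sqrt(din),
                       0.01 * jax.random.normal(k_b, (dout,))))
    return params

def vector_field(theta_params, h_params, z):
    # Dtheta[i, j] = d theta_i / d z_j
    Dtheta = jax.jacfwd(lambda x: mlp(theta_params, x))(z)
    omega = Dtheta.T - Dtheta              # antisymmetric two-form matrix
    gradH = jax.grad(lambda x: mlp(h_params, x)[0])(z)
    # solve omega @ z_dot = grad H; assumes omega is invertible at z
    return jnp.linalg.solve(omega, gradH)

def vf_loss(theta_params, h_params, zs, z_dots):
    # regression of the induced vector field on observed (z, z_dot) data
    preds = jax.vmap(lambda z: vector_field(theta_params, h_params, z))(zs)
    return jnp.mean(jnp.sum((preds - z_dots) ** 2, axis=-1))

key = jax.random.PRNGKey(0)
dim = 4  # e.g. a 4D guiding-center phase space
theta_params = init_mlp(key, [dim, 64, dim])
h_params = init_mlp(jax.random.PRNGKey(1), [dim, 64, 1])
```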
📄 Abstract
This work focuses on learning non-canonical Hamiltonian dynamics from data, where long-term predictions require structure preservation both in the learned model and in the numerical scheme. Previous research addressed each facet separately, with a potential-based architecture for the former and degenerate variational integrators for the latter, but new issues arise when the two are combined. In experiments, the learned model is sometimes numerically unstable due to the gauge dependency of the scheme, rendering long-time simulations impossible. In this paper, we identify this problem and propose two different training strategies to address it: directly learning the vector field, or learning a time-discrete dynamics through the scheme. Several numerical test cases assess the ability of the methods to learn complex physical dynamics, such as the guiding center from gyrokinetic plasma physics.
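As a companion to the second strategy (learning a time-discrete dynamics through the scheme), here is a hedged sketch of training through an implicit one-step integrator. The implicit midpoint rule below is only a stand-in for the paper's degenerate variational integrator, and the fixed-point solver and the `f_theta` signature are assumptions for illustration.

```python
# Hypothetical sketch: train the model through the discrete scheme itself,
# so the same map is used for training and for prediction. Implicit midpoint
# is a stand-in here; the paper uses degenerate variational integrators.
import jax
import jax.numpy as jnp

def implicit_midpoint_step(f, z0, dt, iters=20):
    # solve z1 = z0 + dt * f((z0 + z1) / 2) by fixed-point iteration;
    # gradients flow through the unrolled iterations
    z1 = z0
    for _ in range(iters):
        z1 = z0 + dt * f(0.5 * (z0 + z1))
    return z1

def discrete_loss(params, f_theta, pairs, dt):
    # pairs: array of shape (N, 2, dim) holding (z_n, z_{n+1}) snapshots;
    # f_theta(params, z) is any learned vector field, e.g. the one above
    def one_pair(pair):
        z0, z1 = pair[0], pair[1]
        pred = implicit_midpoint_step(lambda z: f_theta(params, z), z0, dt)
        return jnp.sum((pred - z1) ** 2)
    return jnp.mean(jax.vmap(one_pair)(pairs))
```

Fitting the discrete map rather than the continuous vector field ties the learned model to the integrator actually used at prediction time, which is the point of this second strategy.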