AI Summary
Existing mixed-precision training for neural ordinary differential equations (Neural ODEs) suffers from numerical instability, excessive memory consumption, and high computational overhead. This work presents the first reliable mixed-precision training framework tailored to Neural ODEs, integrating explicit ODE solvers with a customized backward pass. It introduces dynamic adjoint scaling and high-precision gradient accumulation to mitigate the gradient errors and numerical divergence inherent in low-precision arithmetic. The framework employs FP16 for forward/backward computations and intermediate state storage, while retaining weights and accumulated gradients in FP32. We release rampde, a PyTorch-based open-source library enabling plug-and-play deployment. Experiments on image classification and generation tasks demonstrate that the method reduces GPU memory usage by approximately 50%, achieves up to 2× speedup, and preserves full-precision model accuracy.
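The precision split described above (FP16 for velocity evaluations and stored states, FP32 for the accumulated solution) can be sketched as a single explicit Euler step. This is a minimal illustration under stated assumptions, not the library's implementation; the function name and signature are hypothetical.

```python
import torch

def euler_step_mixed(f, t, y32, h):
    """One explicit Euler step with mixed precision (illustrative sketch).

    f   : velocity function (e.g. a neural network), evaluated in FP16
    t   : current time
    y32 : current state, kept in FP32
    h   : step size
    """
    # Evaluate the velocity in low precision (FP16)...
    v16 = f(t, y32.half())
    # ...but accumulate the solution in high precision (FP32).
    return y32 + h * v16.float()

# Toy example: dy/dt = -y, one step from y = 1 with h = 0.1.
y = torch.ones(3)
y_next = euler_step_mixed(lambda t, y: -y, 0.0, y, 0.1)
```

The key point is that the expensive network evaluation and state storage stay in FP16, while the summation that advances the solution, which is where roundoff would otherwise compound over many steps, runs in FP32.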
Abstract
Exploiting low-precision computations has become a standard strategy in deep learning to address the growing computational costs imposed by ever larger models and datasets. However, naively performing all computations in low precision can lead to roundoff errors and instabilities. Therefore, mixed-precision training schemes usually store the weights in high precision and use low-precision computations only for whitelisted operations. Despite their success, these principles are currently not reliable for training continuous-time architectures such as neural ordinary differential equations (Neural ODEs). This paper presents a mixed-precision training framework for Neural ODEs, combining explicit ODE solvers with a custom backpropagation scheme, and demonstrates its effectiveness across a range of learning tasks. Our scheme uses low-precision computations for evaluating the velocity, parameterized by the neural network, and for storing intermediate states, while stability is provided by a custom dynamic adjoint scaling and by accumulating the solution and gradients in higher precision. These contributions address two key challenges in training Neural ODEs: the computational cost of repeated network evaluations and the growth of memory requirements with the number of time steps or layers. Along with the paper, we publish our extendable, open-source PyTorch package rampde, whose syntax resembles that of leading packages to provide a drop-in replacement in existing code. We demonstrate the reliability and effectiveness of our scheme on challenging test cases and on Neural ODE applications in image classification and generative models, achieving approximately 50% memory reduction and up to 2× speedup while maintaining accuracy comparable to single-precision training.
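The role of the dynamic adjoint scaling can be illustrated with a small numeric sketch, analogous in spirit to the loss scaling used by PyTorch's `torch.amp` GradScaler: small adjoint values that would underflow in FP16 are scaled up before the low-precision backward pass and unscaled in FP32 when gradients are accumulated. The scale factor and values below are assumptions for illustration, not the package's actual mechanism.

```python
import torch

# Adjoint values this small underflow to zero in FP16
# (the smallest FP16 subnormal is about 6e-8).
adjoint = torch.full((4,), 1e-8)
assert (adjoint.half() == 0).all()   # information lost without scaling

# Scale the adjoint before the low-precision backward pass...
scale = 2.0 ** 20
scaled16 = (adjoint * scale).half()  # now representable in FP16

# ...and unscale in FP32 when accumulating gradients.
grad32 = scaled16.float() / scale
```

A power-of-two scale is the conventional choice because multiplying and dividing by it changes only the floating-point exponent, introducing no additional rounding error.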