🤖 AI Summary
This study addresses the selection of training strategies for neural networks modeling nonlinear dynamical systems, specifically comparing open-loop (parallel) versus closed-loop (series-parallel) training for long-horizon prediction. Empirical evaluation is conducted across five representative neural architectures on both an experimental pneumatic valve testbed and an industrial robot benchmark dataset, augmented by analysis grounded in classical system identification theory. Results consistently demonstrate that parallel training yields significantly higher long-term prediction accuracy than series-parallel training. Consequently, we propose parallel training as the default paradigm for dynamical system simulation. Moreover, this work establishes, for the first time, a formal theoretical linkage between deep learning training strategies and the system identification objective of simulation error minimization, thereby resolving longstanding terminological ambiguities in the literature.
📝 Abstract
Neural networks have become a widely adopted tool for modeling nonlinear dynamical systems from data. However, the choice of training strategy remains a key design decision, particularly for simulation tasks. This paper compares the two predominant strategies: parallel and series-parallel training. The empirical analysis spans five neural network architectures and two case studies: a pneumatic valve test bench and an industrial robot benchmark. The study reveals that, even though series-parallel training dominates current practice, parallel training consistently yields better long-term prediction accuracy. Additionally, this work clarifies the often inconsistent terminology in the literature and relates both strategies to concepts from system identification. The findings suggest that parallel training should be considered the default strategy for neural network-based simulation of dynamical systems.
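The core difference between the two strategies is what gets fed back into the model during training: series-parallel (one-step-ahead) training feeds in the *measured* past outputs, while parallel (free-run) training feeds back the model's *own* predictions, so the training loss matches the long-horizon simulation objective. A minimal sketch of the two loss functions, assuming a toy first-order linear system in place of a neural network (the function names, parameters, and data are illustrative assumptions, not the paper's code):

```python
import numpy as np

# Toy stand-in for a learned one-step model y[k+1] = f(y[k], u[k]);
# here f is linear with parameters theta = (a, b) for illustration.
def f(theta, y, u):
    a, b = theta
    return a * y + b * u

def series_parallel_loss(theta, u, y):
    """One-step-ahead (teacher forcing): the MEASURED output y[k]
    is fed to the model when predicting y[k+1]."""
    preds = f(theta, y[:-1], u[:-1])
    return np.mean((y[1:] - preds) ** 2)

def parallel_loss(theta, u, y):
    """Free-run simulation: the model's OWN prediction is fed back,
    so errors compound over the horizon, matching how the model is
    later used for simulation."""
    y_hat = np.empty_like(y)
    y_hat[0] = y[0]  # initial condition taken from the data
    for k in range(len(y) - 1):
        y_hat[k + 1] = f(theta, y_hat[k], u[k])
    return np.mean((y[1:] - y_hat[1:]) ** 2)

# Synthetic data from the "true" system y[k+1] = 0.9*y[k] + 0.5*u[k]
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.9 * y[k] + 0.5 * u[k]
y, u = y[:200], u[:200]

# At the true parameters both losses vanish; away from them, the
# parallel loss additionally penalizes accumulated long-horizon drift.
print(series_parallel_loss((0.9, 0.5), u, y))
print(parallel_loss((0.9, 0.5), u, y))
print(parallel_loss((0.5, 0.5), u, y))
```

Minimizing `parallel_loss` corresponds to the simulation-error minimization objective from classical system identification referenced above, whereas `series_parallel_loss` corresponds to one-step prediction-error minimization.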