🤖 AI Summary
Real-world jerk data are scarce, and modeling abrupt acceleration changes under multi-condition torque demands remains challenging. Method: This paper introduces variational autoencoders (VAEs) to electric drivetrain jerk modeling for the first time, proposing a generative framework of unconditional and conditional models, built on latent-space engineering, that integrates powertrain features across diverse operating conditions. The approach enables physically interpretable jerk signal generation without manual parameter calibration. Contribution/Results: Trained on experimental data and exploiting the semantic disentanglement of latent variables, the model is validated on real-world measurements from an electric SUV in two drivetrain configurations. Compared with physics-based and hybrid baselines, it achieves markedly better generation diversity and dynamic fidelity while preserving physical plausibility and interpretability.
📝 Abstract
This work proposes variational autoencoders (VAEs) to predict a vehicle's jerk from a given torque demand, addressing the limitations of sparse real-world datasets. Specifically, we implement unconditional and conditional VAEs to generate jerk signals that integrate features from different drivetrain scenarios. The VAEs are trained on experimental data collected from two variants of a fully electric SUV, which differ in maximum torque delivery and drivetrain configuration. New, physically meaningful jerk signals are generated in an engineering context by interpreting the VAE's latent space. A performance comparison with baseline physics-based and hybrid models confirms the effectiveness of the VAEs. By conditioning data generation on specific inputs, the VAEs bypass the need for exhaustive manual system parametrization while maintaining physical plausibility.
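To make the conditional-generation idea concrete, the sketch below shows the sampling path of a conditional VAE: a latent vector drawn from a standard normal prior is concatenated with a condition vector (e.g. torque demand and operating-condition features) and passed through the decoder to yield a jerk signal. All dimensions, the tiny MLP decoder, and the random (untrained) weights are illustrative assumptions; the paper's actual architecture is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions (not from the paper).
LATENT_DIM = 8    # size of the VAE latent space
COND_DIM = 4      # condition vector, e.g. normalized torque demand + operating features
SIGNAL_LEN = 64   # samples per generated jerk signal
HIDDEN = 32

# Randomly initialized decoder weights stand in for a trained network.
W1 = rng.standard_normal((LATENT_DIM + COND_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, SIGNAL_LEN)) * 0.1
b2 = np.zeros(SIGNAL_LEN)

def decode(z, cond):
    """Map a latent sample plus a condition vector to a jerk signal."""
    h = np.tanh(np.concatenate([z, cond]) @ W1 + b1)
    return h @ W2 + b2

def generate_jerk(cond, n_samples=5):
    """Conditional generation: sample z ~ N(0, I), decode under a fixed condition."""
    return np.stack([decode(rng.standard_normal(LATENT_DIM), cond)
                     for _ in range(n_samples)])

# Hypothetical condition: high torque demand in one operating mode.
torque_condition = np.array([0.8, 0.1, 0.0, 1.0])
signals = generate_jerk(torque_condition)
print(signals.shape)  # (5, 64)
```

Holding the condition fixed while resampling z is also how latent-space interpretation works in practice: varying one latent coordinate at a time reveals which physical aspect of the jerk signal it controls.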