🤖 AI Summary
Existing AI models suffer performance degradation on non-stationary time series due to their implicit stationarity assumption; conventional preprocessing methods (e.g., differencing or STL decomposition) improve stationarity but distort intrinsic trend and seasonal structures, impairing temporal modeling capability. To address this, the authors propose the Decomposed Variational Autoencoder (D-VAE), a framework that combines STL-style time series decomposition and latent space arithmetic (LSA) operations within a variational autoencoder. D-VAE explicitly disentangles and preserves trend, seasonal, and residual components in the latent space, while enforcing stationarity only on the residual component. Crucially, it removes the need for input-side preprocessing and learns stationary latent representations of non-stationary sequences end-to-end. Evaluated on two non-stationary benchmark datasets, downstream forecasting models built on D-VAE's latent representations achieve RMSE competitive with state-of-the-art techniques across four distinct architectures, demonstrating strong generalizability and effectiveness.
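The latent space arithmetic idea above can be illustrated with a toy linear encoder: if a series decomposes additively into trend, seasonal, and residual parts, then for a linear map the component embeddings sum exactly to the embedding of the full series. This is only an intuition sketch, not the paper's D-VAE (which uses a trained, nonlinear VAE encoder); the random projection `W` is a stand-in assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
dim_in, dim_z = 48, 8
W = rng.normal(size=(dim_z, dim_in))  # stand-in for a (linear) encoder

t = np.arange(dim_in)
trend = 0.1 * t
seasonal = np.sin(2 * np.pi * t / 12)
residual = rng.normal(0, 0.2, dim_in)
series = trend + seasonal + residual  # additive decomposition

# Encode each component separately and the full series once.
z_trend, z_seasonal, z_residual = W @ trend, W @ seasonal, W @ residual
z_series = W @ series

# For a linear encoder, latent space arithmetic is exact:
# the component embeddings sum to the embedding of the series.
print(np.allclose(z_series, z_trend + z_seasonal + z_residual))  # True
```

With a nonlinear VAE encoder this identity no longer holds exactly, which is why D-VAE has to disentangle and store the component embeddings explicitly rather than relying on linearity.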
📝 Abstract
AI models have garnered significant research attention for predictive task automation. However, most models implicitly assume a stationary training environment and perform poorly on non-stationary data, because the relationships they learn are stationary. Existing solutions make the data stationary before model training and evaluation, which discards trend and seasonal patterns, components vital for learning the temporal dependencies of the system under study. This research addresses this limitation by proposing a method that enforces stationary behavior within the latent space while preserving trend and seasonal information. The method combines differencing, time-series decomposition, and Latent Space Arithmetic (LSA) to learn efficient approximations of trend and seasonal information, which are stored as embeddings within the latent space of a Variational Autoencoder (VAE). The approach's ability to preserve trend and seasonal information was evaluated on two non-stationary time-series datasets. For predictive performance, four deep learning models were trained on the latent vector representations produced by the proposed method, and all four achieved results competitive with state-of-the-art techniques under RMSE.
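A minimal numpy-only sketch of the preprocessing pipeline the abstract describes: decompose a non-stationary series into trend, seasonal, and residual components, then apply differencing only to the residual so that the trend and seasonal structure is preserved rather than destroyed. The decomposition here uses a centered moving average and periodic means as simple stand-ins; the paper's actual decomposition and the VAE embedding step are not reproduced.

```python
import numpy as np

def decompose(series, period):
    """Additive decomposition: trend (centered moving average),
    seasonal (periodic means of the detrended series), residual."""
    n = len(series)
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")  # crude trend estimate
    detrended = series - trend
    # Seasonal component: mean of each position within the period.
    seasonal_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal_means, n // period + 1)[:n]
    residual = series - trend - seasonal
    return trend, seasonal, residual

# Synthetic non-stationary series: linear trend + sinusoidal season + noise.
rng = np.random.default_rng(0)
t = np.arange(240)
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

trend, seasonal, residual = decompose(series, period=12)

# Differencing the residual (rather than the raw series) pursues
# stationarity without discarding the trend/seasonal components,
# which remain available to embed in the latent space.
stationary_residual = np.diff(residual)
print(stationary_residual.var() < series.var())  # True: variance drops sharply
```

Differencing the raw series instead would remove the trend entirely, which is exactly the information loss the proposed method is designed to avoid.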