🤖 AI Summary
This work addresses the challenge of explaining and correcting prediction errors from black-box time-series models. We propose an interpretable forecasting correction framework based on local surrogate modeling. Its core innovation is the “parameter-difference explanation paradigm”: prediction errors are characterized by shifts in the parameters of a base model (e.g., ARIMA or ETS), and a surrogate model learns these errors to guide parameter refitting—rendering the correction process inherently interpretable. By transforming latent errors into observable, semantically meaningful parameter changes, our method simultaneously improves forecast accuracy and uncovers hidden dynamics, such as periodic disturbances or trend deviations. Extensive experiments on multiple benchmark datasets demonstrate consistent performance gains and yield actionable mechanistic insights, thereby bridging the gap between predictive power and model interpretability in time-series forecasting.
📝 Abstract
We introduce a local surrogate approach for explainable time-series forecasting. An initially non-interpretable predictive model is used to improve the forecasts of a classical time-series 'base model'. Explainability of the correction is obtained by refitting the base model to the data after the predicted error has been removed (subtracted); the resulting difference in the base model's parameters can then be interpreted. We provide illustrative examples demonstrating the method's potential to discover and explain underlying patterns in the data.
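The parameter-difference idea can be illustrated with a minimal, self-contained sketch. Here the base model is a simple AR(1) fitted by least squares (standing in for ARIMA/ETS), and the surrogate's error prediction is replaced by the true periodic disturbance, which in practice would come from a trained black-box model. All names (`fit_ar1`, `phi_before`, `phi_after`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an AR(1) core process plus a hidden periodic disturbance.
n = 500
phi_true = 0.6
t = np.arange(n)
disturbance = 0.8 * np.sin(2 * np.pi * t / 50)

core = np.zeros(n)
for i in range(1, n):
    core[i] = phi_true * core[i - 1] + 0.1 * rng.standard_normal()
y = core + disturbance  # observed series

def fit_ar1(series):
    """Least-squares AR(1) coefficient -- the interpretable 'base model'."""
    x, target = series[:-1], series[1:]
    return float(np.dot(x, target) / np.dot(x, x))

# Base model fitted to the raw data: the periodic disturbance biases phi.
phi_before = fit_ar1(y)

# Stand-in for the surrogate's error prediction (assumed perfect here;
# in the actual method a black-box model would be trained to predict it).
predicted_error = disturbance

# Refit after subtracting the predicted error; the parameter shift
# phi_before - phi_after is the interpretable explanation of the correction.
phi_after = fit_ar1(y - predicted_error)

print(f"phi before correction: {phi_before:.3f}")
print(f"phi after  correction: {phi_after:.3f}")
print(f"parameter difference:  {phi_before - phi_after:.3f}")
```

The large drop in the estimated AR coefficient after removing the predicted error signals that the black-box correction was compensating for a slow, autocorrelated (here: periodic) component rather than a genuine change in the autoregressive dynamics.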