Amortized Predictability-aware Training Framework for Time Series Forecasting and Classification

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge posed by low-predictability samples in time series data, which often lead to unstable training dynamics or convergence to suboptimal solutions. To mitigate this issue, the authors propose a general Amortized Predictability-aware Training Framework (APTF) that dynamically identifies low-predictability instances and adaptively reweights their contribution to the loss. The framework incorporates a Hierarchical Predictability-aware Loss (HPL), which progressively intensifies penalties on hard-to-predict samples during training, and an amortized predictability estimation module that effectively reduces estimation errors caused by model bias. Empirical results demonstrate that the proposed approach significantly enhances both performance and training stability across time series forecasting and classification tasks.
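The reweighting idea in the summary — down-weight low-predictability samples early on, then progressively restore their loss penalty as training evolves — can be illustrated with a toy sketch. This is not the paper's actual HPL or amortization module; the function name, the use of batch-median loss as a stand-in predictability score, and the linear epoch ramp are all illustrative assumptions.

```python
import numpy as np

def hpl_style_weights(per_sample_loss, epoch, max_epoch, tau=1.0):
    """Toy predictability-aware reweighting (illustrative, not the paper's HPL).

    Samples whose loss sits far above the batch median are treated as
    low-predictability: their weight is reduced early in training, and the
    penalty is progressively restored as the epoch ramp approaches 1.
    """
    median = np.median(per_sample_loss)
    # Pseudo "predictability" score: higher excess loss -> lower score in (0, 1].
    pred_score = np.exp(-(per_sample_loss - median).clip(min=0.0) / tau)
    # Linear schedule in [0, 1]: 0 = strongly down-weight hard samples,
    # 1 = uniform weighting (full penalty on every sample).
    ramp = epoch / max_epoch
    return ramp + (1.0 - ramp) * pred_score

losses = np.array([0.10, 0.20, 0.15, 2.50])   # last sample is hard to predict
w_early = hpl_style_weights(losses, epoch=1, max_epoch=100)
w_late = hpl_style_weights(losses, epoch=90, max_epoch=100)
weighted_loss = float(np.mean(w_early * losses))
```

Early in training the hard sample contributes little to the weighted loss; late in training its weight is close to the others, mirroring the "progressively expands their loss penalty" behavior described above.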

📝 Abstract
Time series data are prone to noise in various domains, and training samples may contain low-predictability patterns that deviate from the normal data distribution, leading to training instability or convergence to poor local minima. Therefore, mitigating the adverse effects of low-predictability samples is crucial for time series analysis tasks such as time series forecasting (TSF) and time series classification (TSC). While many deep learning models have achieved promising performance, few consider how to identify and penalize low-predictability samples to improve model performance from the training perspective. To fill this gap, we propose a general Amortized Predictability-aware Training Framework (APTF) for both TSF and TSC. APTF introduces two key designs that enable the model to focus on high-predictability samples while still learning appropriately from low-predictability ones: (i) a Hierarchical Predictability-aware Loss (HPL) that dynamically identifies low-predictability samples and progressively expands their loss penalty as training evolves, and (ii) an amortization model that mitigates predictability estimation errors caused by model bias, further enhancing HPL's effectiveness. The code is available at https://github.com/Meteor-Stars/APTF.
Problem

Research questions and friction points this paper is trying to address.

time series forecasting
time series classification
low-predictability samples
training instability
noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Amortized Predictability-aware Training
Hierarchical Predictability-aware Loss
Time Series Forecasting
Time Series Classification
Low-predictability Sample Mitigation
Xu Zhang
Shanghai Key Laboratory of Data Science, College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China
Peng Wang
Professor, Computer Science, Fudan University
Database, Data mining
Yichen Li
Department of Electrical and Computer Engineering, University of British Columbia (UBC), Vancouver, Canada
Wei Wang
Shanghai Key Laboratory of Data Science, College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China