🤖 AI Summary
To address the dual challenges of cold-start (scarce labeled data) and limited interpretability in time-series classification, this paper proposes AdaptDTW—a novel adaptive and trainable framework. Methodologically, it reformulates the recurrence relation of Dynamic Time Warping (DTW) as a differentiable recurrent neural network—the first such formulation—and introduces a dynamic length-reduction algorithm to learn structure-preserving, interpretable prototypes directly from raw sequences. The model supports end-to-end training and smoothly adapts across resource regimes, from extremely low-label to fully supervised settings. Empirically, AdaptDTW achieves significant improvements over state-of-the-art methods under low-resource conditions on multiple benchmark datasets, while maintaining competitive accuracy in high-resource scenarios. Crucially, its decisions are inherently interpretable via prototype-based matching, enabling visualizable, post-hoc explanation. Thus, AdaptDTW unifies strong predictive performance, broad adaptability, and transparent interpretability within a single unified framework.
📝 Abstract
Neural networks have achieved remarkable success in time series classification, but their reliance on large amounts of labeled data for training limits their applicability in cold-start scenarios. Moreover, they lack interpretability, reducing transparency in decision-making. In contrast, dynamic time warping (DTW) combined with a nearest neighbor classifier is widely used for its effectiveness in limited-data settings and its inherent interpretability. However, as a non-parametric method, it is not trainable and cannot leverage large amounts of labeled data, making it less effective than neural networks in rich-resource scenarios. In this work, we aim to develop a versatile model that adapts to cold-start conditions and becomes trainable with labeled data, while maintaining interpretability. We propose a dynamic length-shortening algorithm that transforms time series into prototypes while preserving key structural patterns, thereby enabling the reformulation of the DTW recurrence relation into an equivalent recurrent neural network. Based on this, we construct a trainable model that mimics DTW's alignment behavior. As a neural network, it becomes trainable when sufficient labeled data is available, while still retaining DTW's inherent interpretability. We apply the model to several benchmark time series classification tasks and observe that it significantly outperforms previous approaches in low-resource settings and remains competitive in rich-resource settings.