🤖 AI Summary
In unsupervised domain adaptation (UDA) for time-series data, conventional pseudo-labeling methods fail to capture temporal dynamics and channel-specific distribution shifts, resulting in low-quality pseudo-labels. To address this, the authors propose TransPL: a framework that first learns a codebook of time-series patches via vector quantization (VQ); then constructs class- and channel-wise code transition matrices to explicitly model cross-domain code transition patterns and channel-wise shifts; and finally generates high-confidence, interpretable pseudo-labels through Bayesian posterior inference over channel-weighted class-conditional likelihoods. By jointly modeling temporal code dynamics and channel specificity, TransPL also naturally supports weakly supervised UDA. Evaluated on four time-series UDA benchmarks, it outperforms state-of-the-art pseudo-labeling approaches by an average of 6.1% in accuracy and 4.9% in F1-score, while providing visually interpretable domain-shift analysis.
📝 Abstract
Unsupervised domain adaptation (UDA) for time series data remains a critical challenge in deep learning, with traditional pseudo-labeling strategies failing to capture temporal patterns and channel-wise shifts between domains, producing sub-optimal pseudo-labels. To address these limitations, we introduce TransPL, a novel approach that models the joint distribution $P(\mathbf{X}, y)$ of the source domain through code transition matrices, where the codes are derived from vector quantization (VQ) of time series patches. Our method constructs class- and channel-wise code transition matrices from the source domain and employs Bayes' rule for target domain adaptation, generating pseudo-labels based on channel-wise weighted class-conditional likelihoods. TransPL offers three key advantages: explicit modeling of temporal transitions and channel-wise shifts between different domains, versatility towards different UDA scenarios (e.g., weakly-supervised UDA), and explainable pseudo-label generation. We validate TransPL's effectiveness through extensive analysis on four time series UDA benchmarks and confirm that it consistently outperforms state-of-the-art pseudo-labeling methods by a strong margin (6.1% accuracy improvement, 4.9% F1 improvement), while providing interpretable insights into the domain adaptation process through its learned code transition matrices.
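The core pseudo-labeling recipe in the abstract (class- and channel-wise code transition matrices fitted on the source domain, then Bayes' rule with channel-weighted class-conditional likelihoods on the target) can be sketched as follows. This is a minimal illustration under simplifying assumptions: the function names and array shapes are made up here, code sequences are treated as first-order Markov chains, and channel weights and class priors are taken as given rather than learned as in the paper.

```python
import numpy as np

def fit_transition_matrices(code_seqs, labels, n_codes, n_classes, smoothing=1.0):
    """Count code-to-code transitions per (class, channel) on labeled source data.

    code_seqs: (n_samples, n_channels, seq_len) array of VQ code indices.
    Returns row-normalized transition probabilities of shape
    (n_classes, n_channels, n_codes, n_codes), with Laplace smoothing.
    """
    n_channels = code_seqs.shape[1]
    T = np.full((n_classes, n_channels, n_codes, n_codes), smoothing)
    for seq, y in zip(code_seqs, labels):
        for c in range(n_channels):
            for a, b in zip(seq[c][:-1], seq[c][1:]):
                T[y, c, a, b] += 1.0
    return T / T.sum(axis=-1, keepdims=True)

def pseudo_label(seq, T, channel_weights, priors):
    """Bayes' rule: argmax_y  log P(y) + sum_c w_c * log P(seq_c | y, channel c)."""
    n_classes, n_channels = T.shape[:2]
    log_post = np.log(priors)
    for y in range(n_classes):
        for c in range(n_channels):
            # Log-likelihood of the observed code transitions under class y.
            ll = np.log(T[y, c, seq[c][:-1], seq[c][1:]]).sum()
            log_post[y] += channel_weights[c] * ll
    return int(np.argmax(log_post))
```

On a toy single-channel example where class 0 tends to stay in code 0 and class 1 in code 1, a target sequence dominated by 0-to-0 transitions receives pseudo-label 0, and the per-class log-likelihoods make the decision inspectable, which mirrors the explainability claim above.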