TransPL: VQ-Code Transition Matrices for Pseudo-Labeling of Time Series Unsupervised Domain Adaptation

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In unsupervised domain adaptation (UDA) for time-series data, conventional pseudo-labeling methods fail to capture temporal dynamics and channel-specific distribution shifts, resulting in low-quality pseudo-labels. To address this, the paper proposes TransPL, a framework that first learns a codebook over time-series patches via vector quantization (VQ); then constructs class- and channel-wise code transition matrices from the source domain to explicitly model temporal code transitions and channel-wise shifts; and finally generates high-confidence, interpretable pseudo-labels for the target domain via Bayes' rule, using channel-weighted class-conditional likelihoods. By jointly modeling temporal code dynamics and channel specificity, TransPL also naturally supports weakly supervised UDA. Evaluated on four time-series UDA benchmarks, it improves over state-of-the-art pseudo-labeling approaches by 6.1% in accuracy and 4.9% in F1-score on average, while providing interpretable domain-shift analysis through its learned transition matrices.

📝 Abstract
Unsupervised domain adaptation (UDA) for time series data remains a critical challenge in deep learning, with traditional pseudo-labeling strategies failing to capture temporal patterns and channel-wise shifts between domains, producing sub-optimal pseudo-labels. As such, we introduce TransPL, a novel approach that addresses these limitations by modeling the joint distribution $P(\mathbf{X}, y)$ of the source domain through code transition matrices, where the codes are derived from vector quantization (VQ) of time series patches. Our method constructs class- and channel-wise code transition matrices from the source domain and employs Bayes' rule for target domain adaptation, generating pseudo-labels based on channel-wise weighted class-conditional likelihoods. TransPL offers three key advantages: explicit modeling of temporal transitions and channel-wise shifts between different domains, versatility towards different UDA scenarios (e.g., weakly-supervised UDA), and explainable pseudo-label generation. We validate TransPL's effectiveness through extensive analysis on four time series UDA benchmarks and confirm that it consistently outperforms state-of-the-art pseudo-labeling methods by a strong margin (6.1% accuracy improvement, 4.9% F1 improvement), while providing interpretable insights into the domain adaptation process through its learned code transition matrices.
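The first stage of the pipeline described in the abstract can be sketched in a few lines: count first-order transitions between VQ codes on the labeled source domain, separately per class, and row-normalize into stochastic matrices. This is an illustrative sketch only; the function name and data layout are assumptions, not the authors' implementation.

```python
import numpy as np

def class_transition_matrices(code_seqs, labels, n_classes, n_codes):
    """Count first-order VQ-code transitions per class on labeled source
    sequences and row-normalize each class's counts into a stochastic
    matrix of shape (n_codes, n_codes)."""
    counts = np.zeros((n_classes, n_codes, n_codes))
    for seq, y in zip(code_seqs, labels):
        # accumulate transitions a -> b along the code sequence
        for a, b in zip(seq[:-1], seq[1:]):
            counts[y, a, b] += 1
    rows = counts.sum(axis=2, keepdims=True)
    # rows with no observed transitions stay all-zero instead of NaN
    return counts / np.maximum(rows, 1)
```

In the actual method these matrices are built per channel as well as per class, so the full statistic would carry an extra channel dimension.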
Problem

Research questions and friction points this paper is trying to address.

Addresses unsupervised domain adaptation for time series data
Overcomes limitations in capturing temporal patterns and channel shifts
Improves pseudo-labeling accuracy and interpretability in domain adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses VQ-code transition matrices for pseudo-labeling
Models joint distribution with Bayes' rule
Explicitly captures temporal and channel-wise shifts
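The Bayes'-rule step listed above can be illustrated as follows: score a target sample's per-channel code sequences under each class's transition matrices, weight the log-likelihoods per channel, add the log class prior, and take the argmax as the pseudo-label. Names and shapes here are hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

def pseudo_label(channel_seqs, class_T, channel_w, prior):
    """Channel-weighted class-conditional log-likelihood of a target
    sample, combined with the class prior via Bayes' rule.
    class_T: array (n_classes, n_channels, K, K) of row-stochastic
    transition matrices; channel_seqs: one code sequence per channel."""
    log_post = np.log(np.asarray(prior, dtype=float))
    for c in range(class_T.shape[0]):
        for ch, seq in enumerate(channel_seqs):
            T = class_T[c, ch]
            for a, b in zip(seq[:-1], seq[1:]):
                # small epsilon guards against log(0) for unseen transitions
                log_post[c] += channel_w[ch] * np.log(T[a, b] + 1e-12)
    return int(np.argmax(log_post))
```

Because the decision decomposes into per-channel, per-transition terms, each pseudo-label can be traced back to the code transitions that produced it, which is the source of the method's interpretability.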
Jaeho Kim
Artificial Intelligence Graduate School, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea.
Seulki Lee
Associate Professor of Computer Science, UNIST
Embedded Artificial Intelligence, Machine Learning, Mobile Computing, Cyber-Physical Systems