Transformer-Based Contrastive Meta-Learning For Low-Resource Generalizable Activity Recognition

📅 2024-12-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address severe distribution shifts, high annotation costs, and poor generalization in cross-user and cross-scenario human activity recognition (HAR) under low-resource settings, this paper proposes a Transformer-oriented meta-optimization framework. The method integrates attention-based feature extraction with a supervised contrastive loss to explicitly enforce distributional robustness during meta-training; it further introduces a virtual target domain synthesis strategy to enhance model adaptability to unseen domains in a data-efficient manner. Crucially, the approach operates without access to real target-domain labels, substantially reducing reliance on domain-specific supervision. Evaluated on multiple low-resource distribution-shift benchmarks, the method achieves an average accuracy improvement of 5.2% over state-of-the-art baselines, outperforming existing contrastive learning and meta-learning approaches in generalization capability. This work establishes a novel paradigm for scalable, resource-efficient HAR in heterogeneous real-world deployments.
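The paper does not detail its virtual target domain synthesis strategy in this summary; a common data-efficient way to simulate an unseen domain from available source domains is mixup-style convex interpolation. The sketch below illustrates that idea under this assumption; the function name and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def synthesize_virtual_domain(domain_a, domain_b, alpha=0.4, rng=None):
    """Mix samples from two source domains into a virtual domain.

    Assumption: a Beta-distributed convex combination (mixup-style),
    used here only to illustrate virtual-domain synthesis.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    n = min(len(domain_a), len(domain_b)) # pair up equal-sized batches
    virtual = lam * domain_a[:n] + (1.0 - lam) * domain_b[:n]
    return virtual, lam
```

During meta-training, such a virtual domain could stand in as the held-out "target" split, so generalization is rehearsed without real target-domain labels.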

📝 Abstract
Deep learning has been widely adopted for human activity recognition (HAR), but generalizing a trained model across diverse users and scenarios remains challenging due to distribution shifts (DS). The inherent low-resource challenge in HAR, i.e., that collecting and labeling adequate human-involved data can be prohibitively costly, further raises the difficulty of tackling DS. We propose TACO, a novel Transformer-based contrastive meta-learning approach for generalizable HAR. TACO addresses DS by synthesizing virtual target domains during training with explicit consideration of model generalizability. Additionally, we extract expressive features with the attention mechanism of the Transformer and incorporate a supervised contrastive loss within our meta-optimization to enhance representation learning. Our evaluation demonstrates that TACO achieves notably better performance across various low-resource DS scenarios.
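The supervised contrastive loss the abstract refers to is the standard formulation (pull same-class embeddings together, push different-class embeddings apart). A minimal NumPy sketch, not the authors' code, with an assumed temperature of 0.1:

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch.

    features: (N, D) embeddings (L2-normalized internally)
    labels:   (N,) integer class ids
    """
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature          # pairwise similarities
    n = sim.shape[0]
    not_self = ~np.eye(n, dtype=bool)                  # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    exp_sim = np.exp(sim) * not_self
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: same label as the anchor, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                             # skip anchors with no positives
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In TACO this objective is folded into the meta-optimization loop so the representation stays discriminative across the synthesized domains, though the exact integration is described in the full paper.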
Problem

Research questions and friction points this paper is trying to address.

Resource-limited settings
Transformer models
Human activity recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

TACO
Transformer
Contrastive Learning
Junyao Wang
University of California, Irvine
Efficient AI, Autonomous Driving
M. A. Faruque
Department of Electrical Engineering and Computer Science, University of California, Irvine, CA, United States