🤖 AI Summary
Cross-user electromyography (EMG) gesture recognition suffers from poor model generalizability due to inter-subject physiological and behavioral variability, hindering real-world deployment. To address this, we propose a fully unsupervised, source-data-free personalization framework that adapts efficiently to unseen users via a two-stage self-adaptive strategy. First, sequence-level cross-view contrastive learning disentangles shared and subject-specific features. Second, the model is fine-tuned without supervision using high-confidence pseudo-labels generated on target-domain data. Crucially, the method requires neither source-domain data (labeled or unlabeled) nor target-domain annotations. Extensive experiments on multiple benchmark datasets demonstrate consistent accuracy improvements of at least 2.0% over state-of-the-art methods. To our knowledge, this is the first approach to achieve robust, scalable, and fully unsupervised cross-user EMG gesture recognition.
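To make the first stage concrete, the sketch below shows a standard cross-view contrastive loss (SimCLR-style NT-Xent) over two views of the same EMG sequences in PyTorch. This is an illustrative stand-in under the assumption that the paper's Sequence-Cross Perspective objective follows this general form; the function name, temperature, and embedding shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style NT-Xent loss over two views of the same EMG sequences.

    z1, z2: (batch, dim) encoder embeddings of view 1 and view 2.
    (Illustrative stand-in; the paper's exact objective may differ.)
    """
    # Stack both views and normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.size(0)
    # The positive for row i is the other view of the same sequence.
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In practice, the two views might come from augmentations such as temporal cropping or channel masking of the same raw EMG window before encoding; the paper's specific view construction is not detailed here.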
📝 Abstract
Cross-user electromyography (EMG)-based gesture recognition is a fundamental challenge for scalable and personalized human-machine interaction in real-world applications. Despite extensive efforts, existing methodologies struggle to generalize across users due to the intrinsic biological variability of EMG signals, which arises from anatomical heterogeneity and diverse task execution styles. To address this limitation, we introduce EMG-UP, a novel and effective framework for Unsupervised Personalization in cross-user gesture recognition. The framework follows a two-stage adaptation strategy: (1) Sequence-Cross Perspective Contrastive Learning, which disentangles robust and user-specific feature representations by capturing intrinsic signal patterns invariant to inter-user variability, and (2) Pseudo-Label-Guided Fine-Tuning, which refines the model for each individual user without requiring access to source-domain data. Extensive evaluations show that EMG-UP achieves state-of-the-art performance, outperforming prior methods by at least 2.0% in accuracy.
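The abstract does not include code, so the following is a minimal sketch of how the second stage (pseudo-label-guided, source-free fine-tuning) is commonly implemented in PyTorch: the model labels unlabeled target-domain EMG windows with its own predictions and trains only on those above a confidence threshold. The `model`, `target_loader`, threshold value, and loop structure are hypothetical assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def pseudo_label_finetune(model, target_loader, optimizer,
                          threshold: float = 0.95, epochs: int = 1):
    """Source-free fine-tuning on unlabeled target-domain EMG windows.

    Keeps only predictions whose softmax confidence exceeds `threshold`
    and treats them as labels. (Sketch; threshold and schedule are assumed.)
    """
    model.train()
    for _ in range(epochs):
        for x in target_loader:                    # x: (batch, channels, time)
            with torch.no_grad():
                probs = F.softmax(model(x), dim=1)
                conf, pseudo = probs.max(dim=1)    # confidence and pseudo-label
            keep = conf >= threshold               # high-confidence mask only
            if not keep.any():
                continue
            loss = F.cross_entropy(model(x[keep]), pseudo[keep])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Thresholding on softmax confidence is the usual guard against confirmation bias in self-training: low-confidence, likely-wrong pseudo-labels are simply discarded rather than reinforced.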