MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements

📅 2025-06-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Multimodal time-series data from heterogeneous wearable sensors in digital health contain rich physiological information, yet supervised learning approaches are hindered by the scarcity of labeled data in clinical settings. To address this bottleneck, we propose the first cross-modal masked autoencoding framework tailored for multimodal health signals. Built upon the Transformer architecture, our method introduces a theoretically grounded cross-modal random masking strategy that jointly models intra-temporal dependencies and inter-modal correlations, enabling effective unsupervised representation learning. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods on both signal reconstruction and downstream health prediction tasks, including disease risk assessment and activity recognition. To foster reproducibility and adoption, we fully open-source our implementation, pre-trained models, and standardized preprocessing tools.

๐Ÿ“ Abstract
The growing prevalence of digital health technologies has led to the generation of complex multi-modal data, such as physical activity measurements simultaneously collected from various sensors of mobile and wearable devices. These data hold immense potential for advancing health studies, but current methods predominantly rely on supervised learning, requiring extensive labeled datasets that are often expensive or impractical to obtain, especially in clinical studies. To address this limitation, we propose a self-supervised learning framework called Multi-modal Cross-masked Autoencoder (MoCA) that leverages cross-modality masking and the Transformer autoencoder architecture to utilize both temporal correlations within modalities and cross-modal correlations between data streams. We also provide theoretical guarantees to support the effectiveness of the cross-modality masking scheme in MoCA. Comprehensive experiments and ablation studies demonstrate that our method outperforms existing approaches in both reconstruction and downstream tasks. We release open-source code for data processing, pre-training, and downstream tasks in the supplementary materials. This work highlights the transformative potential of self-supervised learning in digital health and multi-modal data.
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of labeled multi-modal health data for supervised learning
Proposes self-supervised learning for temporal and cross-modal correlations
Improves reconstruction and downstream tasks in digital health
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised multi-modal learning framework
Cross-modality masking with Transformer autoencoder
Utilizes temporal and cross-modal correlations
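The cross-modality masking idea described above can be sketched as follows. This is a minimal illustration, assuming masked tokens are sampled uniformly at random over the joint (modality, time) patch grid so that reconstruction must draw on both other modalities and neighboring time steps; the function name, shapes, and mask ratio are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cross_modal_random_mask(n_modalities, n_timesteps, mask_ratio=0.75, rng=None):
    """Sample a random mask jointly over the (modality, time) token grid.

    Because masking is applied across modalities rather than per modality,
    a masked patch in one data stream can remain visible in another at the
    same time step. The autoencoder is thereby pushed to exploit both
    cross-modal correlations and within-modality temporal correlations.
    Returns a boolean array where True marks a masked token.
    """
    rng = np.random.default_rng(rng)
    n_tokens = n_modalities * n_timesteps
    n_masked = int(round(mask_ratio * n_tokens))
    flat = np.zeros(n_tokens, dtype=bool)
    # Choose masked positions without replacement over the flattened grid.
    flat[rng.choice(n_tokens, size=n_masked, replace=False)] = True
    return flat.reshape(n_modalities, n_timesteps)

# Example: 3 hypothetical sensor streams (e.g. accelerometer, heart rate,
# gyroscope), 16 time patches, 75% of the 48 tokens masked jointly.
mask = cross_modal_random_mask(3, 16, mask_ratio=0.75, rng=0)
```

In a Transformer masked autoencoder, only the unmasked tokens would be encoded, and the decoder would reconstruct the masked ones; the key design choice sketched here is that the mask is drawn over the full modality-by-time grid rather than independently per modality.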
Howon Ryu
Division of Biostatistics and Bioinformatics, University of California San Diego, San Diego, CA, USA
Yuliang Chen
University of California, San Diego
Self-Supervised Learning, Multimodal Learning
Yacun Wang
Department of Computer Science and Engineering, University of California San Diego, San Diego, CA, USA
Andrea Z LaCroix
Division of Epidemiology, University of California San Diego, San Diego, CA, USA
Chongzhi Di
Professor of Biostatistics, Fred Hutchinson Cancer Center
functional data, multilevel models, measurement error
Loki Natarajan
Professor of Biostatistics and Bioinformatics, University of California San Diego
biostatistics, bioinformatics, computational biology
Yu Wang
Jingjing Zou
University of California, San Diego
Statistics, Biostatistics