RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization of wearable motion data models on downstream tasks with limited labeled samples, this paper introduces the first self-supervised foundation model specifically designed for wearable motion data. Methodologically, it employs relative contrastive learning to model semantic similarity among accelerometer time series, proposes a novel learnable distance metric incorporating physical priors—including motion primitive similarity and rotational invariance—and adopts a softened contrastive loss. The model is pre-trained on a large-scale temporal dataset comprising 1 billion motion segments from 87,376 users. Experiments demonstrate substantial improvements in few-shot transfer performance across diverse classification and regression downstream tasks. This work provides the first systematic validation of the cross-task generalization capability of self-supervised motion representations, establishing a universal, robust foundation for wearable health analytics.

📝 Abstract
We present RelCon, a novel self-supervised *Rel*ative *Con*trastive learning approach that uses a learnable distance measure in combination with a softened contrastive loss for training a motion foundation model from wearable sensors. The learnable distance measure captures motif similarity and domain-specific semantic information such as rotation invariance. The learned distance provides a measurement of semantic similarity between a pair of accelerometer time-series segments, which is used to measure the distance between an anchor and various other sampled candidate segments. The self-supervised model is trained on 1 billion segments from 87,376 participants from a large wearables dataset. The model achieves strong performance across multiple downstream tasks, encompassing both classification and regression. To our knowledge, we are the first to show the generalizability of a self-supervised learning model with motion data from wearables across distinct evaluation tasks.
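To make the idea of a "softened" contrastive loss concrete, here is a minimal sketch (not the paper's exact formulation): rather than a hard positive/negative split, each candidate segment receives a soft target weight derived from its distance to the anchor, and the model's similarity scores are trained toward that soft distribution. The function name, the cosine-similarity scoring, and the exponential weighting are illustrative assumptions; in RelCon the distance measure itself is learnable and encodes motif similarity and rotation invariance.

```python
import numpy as np

def softened_contrastive_loss(anchor, candidates, dist_fn, tau=0.1):
    """Illustrative softened contrastive loss (a sketch, not RelCon's exact loss).

    anchor:     embedding of the anchor segment, shape (d,)
    candidates: list of candidate segment embeddings, each shape (d,)
    dist_fn:    distance between two embeddings (stands in for the
                learnable distance measure described in the paper)
    tau:        temperature controlling how "soft" the targets are
    """
    # Distances from the anchor to each candidate segment.
    d = np.array([dist_fn(anchor, c) for c in candidates])

    # Soft targets: smaller distance -> larger target weight.
    targets = np.exp(-d / tau)
    targets /= targets.sum()

    # Model similarity scores (cosine similarity, as an assumption).
    sims = np.array([
        anchor @ c / (np.linalg.norm(anchor) * np.linalg.norm(c) + 1e-8)
        for c in candidates
    ])

    # Log-softmax over candidates, computed stably.
    z = sims / tau
    z = z - z.max()
    logp = z - np.log(np.exp(z).sum())

    # Cross-entropy between the soft targets and the predicted distribution.
    return float(-(targets * logp).sum())
```

With hard one-hot targets this reduces to the standard InfoNCE-style contrastive loss; the softened targets instead let "somewhat similar" candidates contribute proportionally rather than being treated as pure negatives.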
Problem

Research questions and friction points this paper is trying to address.

Wearable Devices
Action Recognition
Data Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

RelCon
Unsupervised Learning
Wearable Device Data
Maxwell A. Xu — UIUC, Apple Inc.
Jaya Narain — Apple (machine learning, time series, human-centered design, health, accessibility)
Gregory Darnell — Apple Inc.
H. Hallgrímsson — Apple Inc.
Hyewon Jeong — Apple Inc., MIT
Darren Forde — Apple Inc.
Richard Fineman — Apple Inc.
Karthik J. Raghuram — Apple Inc.
James M. Rehg — Founder Professor of Computer Science, University of Illinois at Urbana-Champaign (computer vision, robotics, machine learning, human-computer interaction, parallel and distributed)
Shirley Ren — Apple Inc.