🤖 AI Summary
This work addresses model scaling for human activity recognition (HAR) in wearable multimodal sensing. We establish and empirically validate, for the first time in HAR, scaling laws linking pretraining dataset size and model parameter count to model performance. Using a Transformer architecture, we conduct large-scale grid search experiments on the UCI HAR, WISDM Phone, and WISDM Watch datasets under a self-supervised pretraining and supervised fine-tuning paradigm. Key findings: scaling the number of users yields substantially greater performance gains than increasing per-user data volume, highlighting the critical role of inter-user diversity and contradicting prior assumptions about data scaling in self-supervised HAR. We observe strong power-law relationships between pretraining loss and both data volume and parameter count, and downstream HAR accuracy scales correspondingly. These results provide quantifiable, reproducible scaling guidelines for designing efficient HAR models.
📝 Abstract
Many deep architectures and self-supervised pre-training techniques have been proposed for human activity recognition (HAR) from wearable multimodal sensors. Scaling laws have the potential to move the field towards more principled design by linking model capacity with pre-training data volume. Yet scaling laws have not been established for HAR to the same extent as in language and vision. By conducting an exhaustive grid search over both the amount of pre-training data and Transformer architectures, we establish the first known scaling laws for HAR. We show that pre-training loss follows a power-law relationship with both the amount of data and the parameter count, and that increasing the number of users in a dataset yields a steeper performance improvement than increasing data per user, indicating that the diversity of pre-training data is important; this contrasts with some previously reported findings in self-supervised HAR. We show that these scaling laws translate to downstream performance improvements on three HAR benchmark datasets covering postures, modes of locomotion, and activities of daily living: UCI HAR, WISDM Phone, and WISDM Watch. Finally, we suggest that some previously published works should be revisited in light of these scaling laws, using more adequate model capacities.
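As a minimal illustration of the kind of power-law fit the abstract describes, a scaling law of the form L(N) = a·N^(−b), where L is pre-training loss and N is the data volume (or parameter count), is linear in log-log space and can be recovered by ordinary least squares. The coefficients below are invented for demonstration; they are not the paper's fitted values.

```python
import numpy as np

# Hypothetical scaling-law coefficients (made up for this sketch,
# not taken from the paper): L(N) = a * N^(-b).
a_true, b_true = 5.0, 0.3

# Synthetic "measured" losses over a grid of dataset sizes.
N = np.logspace(3, 7, num=20)        # e.g. 1e3 .. 1e7 samples
loss = a_true * N ** (-b_true)

# A power law is linear in log-log space: log L = log a - b * log N,
# so a least-squares line fit recovers exponent b and coefficient a.
slope, intercept = np.polyfit(np.log(N), np.log(loss), deg=1)
b_fit, a_fit = -slope, np.exp(intercept)

print(f"fitted exponent b ~ {b_fit:.3f}, coefficient a ~ {a_fit:.3f}")
```

In practice the measured losses would be noisy grid-search results rather than exact power-law values, so the fitted exponent would carry uncertainty; the log-log linear fit is the standard way such scaling exponents are estimated.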