Smooth-Distill: A Self-distillation Framework for Multitask Learning with Wearable Sensor Data

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses key limitations of conventional knowledge distillation in multi-task learning (MTL) for wearable sensor data—specifically, human activity recognition (HAR) and sensor placement detection—including reliance on separate, computationally expensive teacher models and susceptibility to overfitting. We propose Smooth-Distill, a self-distillation framework that eliminates the need for external teachers by leveraging a momentum-updated historical smoothed version of the student model as its own teacher. Integrated into a unified CNN-based MTL architecture, Smooth-Distill introduces no additional parameters. Crucially, it jointly optimizes the self-distillation objective with the multi-task loss, enhancing training efficiency and convergence stability. Extensive experiments on multiple benchmark datasets demonstrate state-of-the-art performance in both HAR and sensor placement detection, with ~30% improvement in training efficiency. The implementation is publicly available.
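The core mechanism in the summary above — a momentum-updated historical copy of the student acting as its own teacher, trained jointly with the multi-task loss — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the momentum value, the loss weighting `alpha`, and the function names are assumptions for the sketch.

```python
# Hedged sketch of the EMA-style teacher behind Smooth-Distill: after each
# student update, every teacher weight is blended toward the corresponding
# student weight. The teacher is just a running average of the student's
# history, so no extra trainable parameters are introduced.

def ema_update(teacher_w, student_w, momentum=0.99):
    """Move each teacher weight a small step toward the student weight."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_w, student_w)]

def combined_loss(task_losses, distill_loss, alpha=0.5):
    """Joint objective: sum of per-task losses plus a weighted
    self-distillation term (weighting scheme assumed for this sketch)."""
    return sum(task_losses) + alpha * distill_loss

# Toy usage: the teacher drifts slowly toward a fixed student.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(3):
    teacher = ema_update(teacher, student, momentum=0.9)
print(teacher)  # each weight has moved 1 - 0.9**3 ≈ 27.1% of the way
```

Because the teacher lags the student, its targets are smoother than the student's own recent predictions, which is the intuition behind the reported convergence stability.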

📝 Abstract
This paper introduces Smooth-Distill, a novel self-distillation framework designed to simultaneously perform human activity recognition (HAR) and sensor placement detection using wearable sensor data. The proposed approach utilizes a unified CNN-based architecture, MTL-net, which processes accelerometer data and branches into two outputs for each respective task. Unlike conventional distillation methods that require separate teacher and student models, the proposed framework utilizes a smoothed, historical version of the model itself as the teacher, significantly reducing training computational overhead while maintaining performance benefits. To support this research, we developed a comprehensive accelerometer-based dataset capturing 12 distinct sleep postures across three different wearing positions, complementing two existing public datasets (MHealth and WISDM). Experimental results show that Smooth-Distill consistently outperforms alternative approaches across different evaluation scenarios, achieving notable improvements in both human activity recognition and device placement detection tasks. This method demonstrates enhanced stability in convergence patterns during training and exhibits reduced overfitting compared to traditional multitask learning baselines. This framework contributes to the practical implementation of knowledge distillation in human activity recognition systems, offering an effective solution for multitask learning with accelerometer data that balances accuracy and training efficiency. More broadly, it reduces the computational cost of model training, which is critical for scenarios requiring frequent model updates or training on resource-constrained platforms. The code and model are available at https://github.com/Kuan2vn/smooth_distill.
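The abstract describes MTL-net as a shared feature extractor over accelerometer data that branches into two task heads. A toy sketch of that shared-trunk, two-head shape is below; the real MTL-net is a CNN, so the dense trunk, layer sizes, and window length here are illustrative assumptions (only the 12 posture classes and 3 wearing positions come from the abstract).

```python
import numpy as np

# Hedged sketch of the two-headed multitask layout: one shared
# representation, two separate linear heads (activity recognition
# and device-placement detection).

rng = np.random.default_rng(0)

def shared_trunk(x, w):
    """Shared features from a flattened accelerometer window
    (one dense layer + ReLU stands in for the paper's CNN trunk)."""
    return np.maximum(x @ w, 0.0)

def head(features, w):
    """Task-specific logits computed from the shared features."""
    return features @ w

window = rng.normal(size=(1, 300))             # e.g. 100 samples x 3 axes
w_trunk = rng.normal(size=(300, 64)) * 0.1
w_activity = rng.normal(size=(64, 12)) * 0.1   # 12 sleep postures
w_placement = rng.normal(size=(64, 3)) * 0.1   # 3 wearing positions

features = shared_trunk(window, w_trunk)
activity_logits = head(features, w_activity)
placement_logits = head(features, w_placement)
print(activity_logits.shape, placement_logits.shape)  # (1, 12) (1, 3)
```

Sharing the trunk is what lets one backbone serve both tasks, so the self-distillation signal regularizes both heads at once.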
Problem

Research questions and friction points this paper is trying to address.

Develops self-distillation for HAR and sensor placement detection
Reduces computational overhead via historical model as teacher
Improves accuracy and efficiency in multitask sensor data learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-distillation framework for multitask learning
Unified CNN architecture for dual-task processing
Smoothed historical model as teacher reduces overhead
Hoang-Dieu Vu
Faculty of EEE, Phenikaa School of Engineering, Phenikaa University, Yen Nghia, Hanoi 12116, Vietnam; Graduate University of Science and Technology, VAST, Hanoi 122300, Vietnam
Duc-Nghia Tran
Institute of Information Technology, VAST, Hanoi 122300, Vietnam
Quang-Tu Pham
Faculty of EEE, Phenikaa School of Engineering, Phenikaa University, Yen Nghia, Hanoi 12116, Vietnam
Hieu H. Pham
College of Engineering & Computer Science, VinUni-Illinois Smart Health Center, VinUniversity
AI, Computer Vision, Deep Learning, Medical Image Analysis, Computational Bioimaging
Nicolas Vuillerme
AGEIS, Université Grenoble Alpes, Grenoble 38000, France; Institut Universitaire de France, Paris 75005, France
Duc-Tan Tran
Faculty of EEE, Phenikaa School of Engineering, Phenikaa University, Yen Nghia, Hanoi 12116, Vietnam