🤖 AI Summary
To address the challenge of balancing robustness and efficiency in human action recognition for home-based elderly care—under conditions of occlusion, sensor noise, and strict real-time constraints—this paper proposes RE-TCN, a novel temporal convolutional network. Methodologically, RE-TCN introduces two key innovations: (i) an Adaptive Temporal Weighting (ATW) mechanism that dynamically enhances the discriminability of critical frames; and (ii) a lightweight architecture integrating depthwise separable convolutions with multi-strategy data augmentation to jointly reduce computational overhead and improve resilience to noise and occlusion. Evaluated on four standard benchmarks—NTU RGB+D 60, Northwestern-UCLA, SHREC'17, and DHG-14/28—RE-TCN achieves state-of-the-art performance, improving average accuracy by 1.8% while accelerating inference by 32%–57%. The source code is publicly available.
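The efficiency gain from depthwise separable convolutions comes from splitting one dense convolution into a per-channel (depthwise) filter plus a 1×1 (pointwise) channel mixer. The sketch below illustrates the parameter-count arithmetic for a 1-D temporal convolution layer; the channel counts and kernel size are illustrative assumptions, not RE-TCN's actual layer configuration (see the released code for that).

```python
# Parameter counts: standard vs. depthwise separable 1-D (temporal) convolution.
# Layer sizes below are hypothetical, chosen only to show the arithmetic.

def standard_conv_params(c_in, c_out, k):
    """Standard conv: every output channel mixes all input channels over k taps."""
    return c_in * c_out * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (one k-tap filter per input channel) + pointwise (1x1 mixing)."""
    return c_in * k + c_in * c_out

c_in, c_out, k = 64, 64, 9          # illustrative temporal-conv layer
std = standard_conv_params(c_in, c_out, k)        # 64 * 64 * 9  = 36864
dsc = depthwise_separable_params(c_in, c_out, k)  # 64*9 + 64*64 = 4672
print(std, dsc, round(std / dsc, 1))              # roughly 7.9x fewer parameters
```

The same factorisation reduces multiply-accumulate operations by a similar ratio, which is where the reported inference speed-up plausibly originates.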
📝 Abstract
The growing ageing population and their preference to maintain independence by living in their own homes require proactive strategies to ensure safety and support. Ambient Assisted Living (AAL) technologies have emerged to facilitate ageing in place by offering continuous monitoring and assistance within the home. Within AAL technologies, action recognition plays a crucial role in interpreting human activities and detecting incidents like falls, mobility decline, or unusual behaviours that may signal worsening health conditions. However, action recognition in practical AAL applications presents challenges, including occlusions, noisy data, and the need for real-time performance. While advancements have been made in accuracy, robustness to noise, and computational efficiency, achieving a balance among all three remains a challenge. To address this challenge, this paper introduces the Robust and Efficient Temporal Convolutional Network (RE-TCN), which comprises three main elements: Adaptive Temporal Weighting (ATW), Depthwise Separable Convolutions (DSC), and data augmentation techniques. These elements aim to enhance the model's accuracy, robustness against noise and occlusion, and computational efficiency within real-world AAL contexts. RE-TCN outperforms existing models in accuracy and in robustness to noise and occlusion, and has been validated on four benchmark datasets: NTU RGB+D 60, Northwestern-UCLA, SHREC'17, and DHG-14/28. The code is publicly available at: https://github.com/Gbouna/RE-TCN
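The abstract does not give the ATW formulation; a common way to realise adaptive frame weighting is a softmax over per-frame importance scores, with each frame's features scaled by its weight. The sketch below assumes that form purely for illustration — the function names, the hand-set scores, and the softmax choice are assumptions, whereas the actual ATW module learns its weighting from the feature sequence (see the paper and repository).

```python
import math

def adaptive_temporal_weights(frame_scores):
    """Numerically stable softmax over per-frame importance scores.
    A minimal sketch: real temporal-weighting modules typically learn
    these scores from the input features rather than take them as given."""
    m = max(frame_scores)
    exps = [math.exp(s - m) for s in frame_scores]
    total = sum(exps)
    return [e / total for e in exps]

def reweight_frames(frames, frame_scores):
    """Scale each frame's feature vector by its temporal weight, so that
    discriminative frames contribute more to the downstream convolution."""
    weights = adaptive_temporal_weights(frame_scores)
    return [[w * x for x in frame] for w, frame in zip(weights, frames)]

# Three frames of 2-D features; the middle frame is scored as most informative.
frames = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
weights = adaptive_temporal_weights([0.0, 2.0, 0.0])
print(weights)  # the middle frame's weight dominates; weights sum to 1
```

The design intuition matches the summary above: frames carrying the discriminative part of an action (e.g. the moment of a fall) are amplified, while redundant or noisy frames are suppressed.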