🤖 AI Summary
To address the dual challenges of lightweight on-device models and cloud transmission latency in real-time fall detection on edge devices (EDs), this paper proposes a multi-tier mobile edge computing (MEC) architecture integrated with knowledge distillation. The architecture enables collaborative inference across frontend, edge, and cloud tiers, coupled with a dynamic task migration mechanism that raises detection accuracy while keeping end-to-end latency low. Knowledge distillation transfers representations from a high-performance cloud model to resource-constrained frontend models, achieving model compression and performance improvement jointly. Evaluated on the SisFall and FallAllD datasets, the method improves classification accuracy by 11.65% and 2.78%, respectively, and reduces end-to-end latency by 46.67% and 54.15%, significantly outperforming existing edge-deployed solutions.
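The dynamic task migration described above can be pictured as confidence-based escalation: each tier answers only when its top-1 confidence clears a threshold, otherwise the sample is forwarded to a more powerful tier. The sketch below is a minimal illustration, not the paper's algorithm; the tier ordering and the threshold values are assumptions for demonstration.

```python
import numpy as np

def route(probs_by_tier, thresholds):
    """Escalate a sample across tiers (e.g. frontend -> edge -> cloud).

    probs_by_tier: one class-probability vector per tier, cheapest first.
    thresholds:    confidence cutoff for every tier except the last;
                   the final tier (cloud) always produces the answer.
    Returns (tier_index, predicted_class).
    """
    for tier, (p, tau) in enumerate(zip(probs_by_tier, thresholds)):
        p = np.asarray(p)
        if p.max() >= tau:          # this tier is confident enough: stop here
            return tier, int(p.argmax())
    p = np.asarray(probs_by_tier[-1])  # no tier was confident: cloud decides
    return len(probs_by_tier) - 1, int(p.argmax())

# A confident frontend answers locally; an unsure one escalates to the edge.
print(route([[0.95, 0.05]], []))                       # hypothetical 1-tier case
print(route([[0.6, 0.4], [0.2, 0.8], [0.1, 0.9]],
            [0.9, 0.7]))                               # frontend unsure -> edge
```

Answering at the cheapest confident tier is what trades a small amount of accuracy headroom for the large latency reductions the paper reports.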
📝 Abstract
The rising aging population has increased the importance of fall detection (FD) systems as an assistive technology, where deep learning techniques are widely applied to enhance accuracy. FD systems typically use edge devices (EDs) worn by individuals to collect real-time data, which are transmitted to a cloud center (CC) or processed locally. However, this architecture faces challenges such as limited ED model capacity and data transmission latency to the CC. Mobile edge computing (MEC), which allows computation at MEC servers deployed between the EDs and the CC, has been explored to address these challenges. We propose a multilayer MEC (MLMEC) framework to balance accuracy and latency. The MLMEC splits the architecture into stations, each with a neural network model. If the front-end equipment cannot detect falls reliably, data are transmitted to a station with more powerful back-end computing. The knowledge distillation (KD) approach was employed to improve front-end detection accuracy by allowing high-power back-end stations to provide additional learning experience, enhancing precision while reducing latency and processing loads. Simulation results demonstrate that the KD approach improved accuracy by 11.65% on the SisFall dataset and 2.78% on the FallAllD dataset. The MLMEC with KD also reduced the data latency rate by 54.15% on the FallAllD dataset and 46.67% on the SisFall dataset compared to the MLMEC without KD. In summary, the MLMEC FD system exhibits improved accuracy and reduced latency.
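The KD transfer described in the abstract is commonly implemented as a Hinton-style distillation objective: the small front-end model is trained against a weighted sum of (a) the KL divergence to the back-end teacher's temperature-softened outputs and (b) the usual cross-entropy on the ground-truth label. The sketch below shows that standard objective in plain NumPy; the temperature `T` and mixing weight `alpha` are hypothetical hyperparameters, not values from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; larger T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=4.0, alpha=0.5):
    """Standard KD loss for one sample:
      alpha * T^2 * KL(teacher_soft || student_soft)   (soft-target term)
    + (1 - alpha) * cross_entropy(student, hard_label) (hard-label term)
    The T^2 factor keeps soft-target gradients comparable across temperatures.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * (T ** 2)
    hard = float(-np.log(softmax(student_logits)[hard_label]))
    return alpha * soft + (1 - alpha) * hard

# When the student already matches the teacher, only the hard-label term remains.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], hard_label=2))
```

Because the teacher's softened outputs encode inter-class similarity (e.g. how fall-like a near-fall activity looks), minimizing this loss gives the compact front-end model "additional learning experience" beyond the hard labels alone, which is the mechanism behind the reported accuracy gains.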