🤖 AI Summary
This work proposes a lightweight, purely model-driven 4D millimeter-wave radar framework for human detection in challenging industrial and underground environments, such as those filled with dust, smoke, or metallic structures, where conventional optical and LiDAR-based perception often fails. Relying on radar as its sole sensing modality, the system achieves real-time performance on embedded edge devices by integrating domain-aware multi-threshold filtering, ego-motion-compensated temporal accumulation, KD-tree-based Euclidean clustering, and Doppler information, followed by a rule-based 3D classifier that enables interpretable, learning-free, and robust detection. Experimental results demonstrate consistent human detection in enclosed trailers and real mine tunnels where cameras and LiDAR fail, confirming the framework's practicality and robustness under extremely low-visibility conditions.
📝 Abstract
Pervasive sensing in industrial and underground environments is severely constrained by airborne dust, smoke, confined geometry, and metallic structures, which rapidly degrade optical and LiDAR-based perception. Elevation-resolved 4D mmWave radar offers strong resilience to such conditions, yet there remains a limited understanding of how to process its sparse and anisotropic point clouds for reliable human detection in enclosed, visibility-degraded spaces. This paper presents a fully model-driven 4D radar perception framework designed for real-time execution on embedded edge hardware. The system uses radar as its sole perception modality and integrates domain-aware multi-threshold filtering, ego-motion-compensated temporal accumulation, KD-tree Euclidean clustering with Doppler-aware refinement, and a rule-based 3D classifier. The framework is evaluated in a dust-filled enclosed trailer and in real underground mining tunnels, and in the tested scenarios the radar-based detector maintains stable pedestrian identification as camera and LiDAR modalities fail under severe visibility degradation. These results suggest that the proposed model-driven approach provides robust, interpretable, and computationally efficient perception for safety-critical applications in harsh industrial and subterranean environments.