🤖 AI Summary
To address the safety challenge of detecting small, near-field obstacles beneath autonomous mobile robots (AMRs) in manufacturing environments, this paper proposes a novel three-layer near-field perception framework. The framework integrates optical interrupt sensing for rapid obstacle presence detection, laser stripe projection coupled with image-based geometric displacement analysis for millimeter-accurate height estimation, and an embedded lightweight YOLOv8s model for semantic-level obstacle classification. This paradigm pioneers the synergistic integration of multimodal optical sensing and edge intelligence, achieving real-time inference at 25–50 FPS on a Raspberry Pi 5 platform. Experimental results demonstrate a substantial improvement in small-obstacle detection rate over conventional LiDAR and ultrasonic approaches, with a 62% reduction in false positives. The solution achieves an optimal trade-off among accuracy, real-time performance, and cost-effectiveness, delivering a deployable, resource-efficient framework for safe near-field navigation of AMRs.
📝 Abstract
Near-field perception is essential for the safe operation of autonomous mobile robots (AMRs) in manufacturing environments. Conventional ranging sensors such as light detection and ranging (LiDAR) and ultrasonic devices provide broad situational awareness but often fail to detect small objects near the robot base. To address this limitation, this paper presents a three-tier near-field perception framework. The first approach employs light-discontinuity detection, which projects a laser stripe across the near-field zone and identifies interruptions in the stripe to perform fast, binary cutoff sensing for obstacle presence. The second approach utilizes light-displacement measurement to estimate object height by analyzing the geometric displacement of a projected stripe in the camera image, which provides quantitative obstacle height information with minimal computational overhead. The third approach employs a computer vision-based object detection model on embedded AI hardware to classify objects, enabling semantic perception and context-aware safety decisions. All methods are implemented on a Raspberry Pi 5 system, achieving real-time performance at 25 or 50 frames per second. Experimental evaluation and comparative analysis demonstrate that the proposed hierarchy balances precision, computation, and cost, thereby providing a scalable perception solution for enabling safe operations of AMRs in manufacturing environments.