🤖 AI Summary
Addressing the challenge of simultaneously performing motion planning and perception for high-degree-of-freedom (DoF) robots in dynamic environments, this paper proposes a perception-quality-guided real-time motion planning framework. The method integrates a neural surrogate model for online perception score estimation into the robot’s closed-loop planning pipeline for the first time, and couples it with GPU-accelerated probabilistic roadmap (PS-PRM) sampling to enable efficient replanning. It further unifies dynamic environment modeling with real-time perception feedback to jointly optimize navigation feasibility and perceptual effectiveness. Evaluations in simulation and on physical robotic platforms demonstrate that the approach significantly improves task success rate and perception quality—both in static and dynamic scenarios—while maintaining strong robustness and real-time performance (average replanning latency < 50 ms). The framework is validated in complex human-robot coexistence settings, including home and hospital environments.
📝 Abstract
In this work, we address the problem of planning robot motions for a high-degree-of-freedom (DoF) robot that effectively achieves a given perception task while the robot and the perception target move in a dynamic environment. Achieving navigation and perception tasks simultaneously is challenging, as these objectives often impose conflicting requirements. Existing methods that compute motion under perception constraints fail to account for obstacles, are designed for low-DoF robots, or rely on simplified models of perception. Furthermore, in dynamic real-world environments, robots must replan and react quickly to changes and directly evaluating the quality of perception (e.g., object detection confidence) is often expensive or infeasible at runtime. This problem is especially important in human-centered environments such as homes and hospitals, where effective perception is essential for safe and reliable operation. To address these challenges, we propose a GPU-parallelized perception-score-guided probabilistic roadmap planner with a neural surrogate model (PS-PRM). The planner explicitly incorporates the estimated quality of a perception task into motion planning for high-DoF robots. Our method uses a learned model to approximate perception scores and leverages GPU parallelism to enable efficient online replanning in dynamic settings. We demonstrate that our planner, evaluated on high-DoF robots, outperforms baseline methods in both static and dynamic environments in both simulation and real-robot experiments.