🤖 AI Summary
To enable sustained exploration by ground robots in unknown underground environments, where terrain constraints severely impede mobility, this paper proposes a ground-air heterogeneous robot collaborative autonomous navigation framework. The method fuses occupancy grid and semantic maps into a multi-resolution hierarchical graph that jointly encodes geometric and semantic traversability together with the volumetric gain of frontiers. A terrain-aware collaborative decision mechanism, built on a shared confidence metric, lets the ground robot autonomously detect impassable regions and trigger UAV-assisted obstacle bypass so exploration can continue; the same confidence metric governs frontier selection and task allocation. Real-world experiments in underground environments show a 37% improvement in exploration coverage and a success rate above 92% for continued exploration after obstacle bypass.
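The summary does not spell out how the confidence metric is computed or how it triggers UAV deployment. The Python sketch below shows one plausible form, assuming a weighted combination of predicted volumetric gain, traversability, and collision risk; the field names, weights, and deployment threshold are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Frontier:
    """Candidate exploration target (all fields are illustrative)."""
    volumetric_gain: float   # predicted unknown volume observable from the frontier
    traversability: float    # geometric/semantic traversability in [0, 1]
    collision_risk: float    # estimated collision probability in [0, 1]

def confidence(f: Frontier, w_gain=0.5, w_trav=0.3, w_risk=0.2) -> float:
    """Weighted confidence score; the weights are placeholders, not the paper's."""
    return w_gain * f.volumetric_gain + w_trav * f.traversability - w_risk * f.collision_risk

def plan_next_action(frontiers, deploy_threshold=0.2):
    """Pick the best frontier for the ground robot; if even the best-scoring
    frontier falls below the threshold (terrain judged impassable), hand the
    target to the UAV instead."""
    best = max(frontiers, key=confidence)
    if confidence(best) < deploy_threshold:
        return ("deploy_uav", best)       # trigger aerial-assisted bypass
    return ("ground_navigate", best)      # ground robot continues on its own
```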
📝 Abstract
Autonomous navigation in unknown environments is a fundamental challenge in robotics, particularly when coordinating ground and aerial robots to maximize exploration efficiency. This paper presents an approach that uses a hierarchical graph to represent the environment, encoding both geometric and semantic traversability. The framework enables the robots to compute a shared confidence metric, which helps the ground robot assess terrain and decide when deploying the aerial robot will extend exploration. The ground robot's confidence in traversing a path is based on factors such as predicted volumetric gain, path traversability, and collision risk. A hierarchy of graphs maintains an efficient representation of traversability and frontier information through multi-resolution maps. Evaluated in a real subterranean exploration scenario, the approach allows the ground robot to autonomously identify zones that it can no longer traverse but that are suitable for aerial deployment. By leveraging this hierarchical structure, the ground robot can selectively share graph information about confidence-assessed frontier targets in the relevant parts of the scene, enabling the aerial robot to navigate beyond obstacles and continue exploration.
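The abstract does not give the graph schema or the sharing mechanism. Below is a minimal sketch, assuming a two-level networkx graph (coarse regions over fine cells) and a hypothetical confidence threshold that selects which frontier nodes the ground robot hands to the aerial robot; all node attributes and thresholds are assumptions for illustration only.

```python
import networkx as nx

# Two resolution levels: a coarse graph of regions and a fine graph of cells,
# linked by a "parent" attribute. Attribute names are illustrative.
coarse = nx.Graph()
fine = nx.Graph()

fine.add_node("cell_17", parent="region_A", traversable=False,
              frontier=True, confidence=0.12)
fine.add_node("cell_18", parent="region_A", traversable=True,
              frontier=True, confidence=0.81)
fine.add_edge("cell_17", "cell_18", cost=1.0)

coarse.add_node("region_A", semantic="rubble", mean_confidence=0.47)

def shared_subgraph(fine_graph, max_conf=0.3):
    """Return only the low-confidence frontier nodes the ground robot would
    share with the aerial robot (the threshold is a placeholder)."""
    keep = [n for n, d in fine_graph.nodes(data=True)
            if d.get("frontier") and d.get("confidence", 0.0) <= max_conf]
    return fine_graph.subgraph(keep).copy()

uav_graph = shared_subgraph(fine)  # frontier nodes the UAV uses to bypass the obstacle
```

Keeping the shared subgraph small (only low-confidence frontier nodes rather than the full map) is one way such selective sharing could limit the data exchanged between the two robots.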