🤖 AI Summary
Existing visual object navigation methods suffer from two key limitations: (1) coupling search and path planning into a single stage with shared reward signals, leading to insufficient training or overfitting; and (2) relying on generic visual encoders that ignore depth and dynamic obstacle information, hindering effective policy learning. To address these issues, we propose a decoupled two-stage navigation framework: (i) a differentiated reward mechanism that separately optimizes target search coverage and path navigation accuracy; (ii) an RGB-D pre-trained depth-aware feature extractor integrated with an online-constructed obstacle map and semantic cues for multimodal state representation; and (iii) end-to-end joint optimization via two-stage reinforcement learning. Evaluated on AI2-Thor and RoboTHOR, our method achieves significant improvements over state-of-the-art approaches: higher success rates, improved path efficiency, and substantially reduced collision and deadlock rates.
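The multimodal state representation in point (ii) can be sketched as a simple concatenation of the three modalities. This is an illustrative sketch only: the function name `build_state`, the feature dimensions, and the use of flat Python lists are assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the multimodal state assembly described above.
# All names and dimensions are illustrative assumptions.

def build_state(visual_feat, obstacle_map, target_embed):
    """Concatenate depth-aware visual features, a flattened local
    obstacle map, and a semantic target embedding into one flat
    state vector for the policy network."""
    flat_map = [cell for row in obstacle_map for cell in row]
    return list(visual_feat) + flat_map + list(target_embed)

# Example with hypothetical sizes: 512-d RGB-D features, an 11x11
# online-built obstacle grid, and a 300-d semantic cue embedding.
state = build_state([0.0] * 512,
                    [[0.0] * 11 for _ in range(11)],
                    [0.0] * 300)
# len(state) == 512 + 121 + 300
```

In practice the three inputs would be produced by the depth-aware encoder, the online obstacle-mapping module, and a semantic embedding of the target category before being fed to the policy.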
📝 Abstract
The task of navigating to a specified object using only visual observations is called visual object navigation (VON). The main bottlenecks of VON are strategy exploration and prior-knowledge exploitation. Traditional strategy exploration ignores the difference between the searching and navigating stages, using the same reward in both, which reduces navigation performance and training efficiency. Our method enables the agent to explore a larger area in the searching stage and to seek the optimal path in the navigating stage, improving the navigation success rate. Traditional prior-knowledge exploitation focuses on learning and utilizing object associations while ignoring the depth and obstacle information in the environment. This paper uses the RGB and depth information of the training scenes to pretrain the feature extractor, which improves navigation efficiency. Obstacle information is memorized by the agent during navigation, reducing the probability of collision and deadlock. Depth, obstacle, and other prior knowledge are concatenated and fed into the policy network, which outputs navigation actions under the training of two-stage rewards. We evaluated our method on AI2-Thor and RoboTHOR and demonstrated that it significantly outperforms state-of-the-art (SOTA) methods in success rate and navigation efficiency.
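The stage-differentiated reward described above can be sketched as follows. This is a hedged illustration, not the paper's exact scheme: the `Stage` enum, the `two_stage_reward` function, the bonus magnitudes, and the assumption of a discretized scene with a distance-to-target signal are all ours.

```python
# A minimal sketch of a two-stage differentiated reward, assuming a
# discretized scene and an available distance-to-target signal.
# Names and reward magnitudes are illustrative assumptions.
from enum import Enum

class Stage(Enum):
    SEARCHING = 0   # target not yet observed: encourage coverage
    NAVIGATING = 1  # target observed: encourage shortest-path progress

def two_stage_reward(stage, visited_cells, cell, prev_dist, dist,
                     step_penalty=-0.01):
    """Return the per-step reward under the stage-specific scheme."""
    if stage is Stage.SEARCHING:
        # Coverage bonus: reward reaching a cell unvisited this episode.
        bonus = 0.1 if cell not in visited_cells else 0.0
    else:
        # Progress bonus: reward reducing the distance to the target.
        bonus = 0.1 * (prev_dist - dist)
    return step_penalty + bonus
```

The intent is that during the searching stage the agent is rewarded for coverage rather than proximity, so it explores a larger area; once the target is observed, the reward switches to path progress, steering the agent toward an efficient route.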