🤖 AI Summary
Image-goal navigation (ImageNav) suffers from weak directional semantics and poor generalization under viewpoint variations. To address these challenges, we propose RSRNav, a spatial-relation-aware navigation framework that models fine-grained spatial correspondences between the goal image and current observations, moving beyond isolated semantic feature extraction. Our key contributions are: (1) a direction-aware cross-correlation module that explicitly encodes pixel-level azimuthal offsets; (2) a progressive relation refinement mechanism that iteratively optimizes spatial-relationship representations in latent space; and (3) an end-to-end policy network grounded solely on relational features, eliminating the need for explicit localization or geometric priors. Evaluated on the AI2-THOR, RoboTHOR, and Habitat benchmarks, our method achieves state-of-the-art performance across all settings, with substantial success-rate gains in the "user-matched goal" setting. It demonstrates strong viewpoint robustness and practical deployment potential.
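To illustrate how pixel-level azimuthal offsets might be read out of a correlation volume, here is a minimal hand-coded sketch. The learned direction-aware module in the paper is not specified at this level of detail, so the pooling scheme below (`azimuthal_offset_profile`, max over goal locations, mean over image rows) is purely a hypothetical stand-in:

```python
import numpy as np

def azimuthal_offset_profile(corr_volume):
    """Collapse a (H*W) x H x W goal-to-observation correlation volume
    into a per-column (azimuth) score.

    Hypothetical reduction: take the best-matching goal location for each
    observation pixel, then average down each image column, so higher
    values suggest the goal content lies at that horizontal direction.
    """
    per_obs_pixel = corr_volume.max(axis=0)   # H x W: best match per pixel
    return per_obs_pixel.mean(axis=0)         # W: mean score per azimuth

# Toy input: a random correlation volume for an 8x8 feature map.
rng = np.random.default_rng(1)
vol = rng.random((64, 8, 8)).astype(np.float32)
profile = azimuthal_offset_profile(vol)
print(profile.shape)  # (8,)
```

A policy network could consume such a profile (or, as in the paper, the full refined relational features) to bias action prediction toward the most correlated heading.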
📝 Abstract
Recent image-goal navigation (ImageNav) methods learn a perception-action policy by separately capturing semantic features of the goal and egocentric images, then passing them to a policy network. However, challenges remain: (1) semantic features often fail to provide accurate directional information, leading to superfluous actions, and (2) performance drops significantly when viewpoint inconsistencies arise between training and application. To address these challenges, we propose RSRNav, a simple yet effective method that reasons about spatial relationships between the goal and current observations as navigation guidance. Specifically, we model the spatial relationship by constructing correlations between the goal and current observations, which are then passed to the policy network for action prediction. These correlations are progressively refined using fine-grained cross-correlation and direction-aware correlation for more precise navigation. Extensive evaluation of RSRNav on three benchmark datasets demonstrates superior navigation performance, particularly in the "user-matched goal" setting, highlighting its potential for real-world applications.
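The correlation construction described above can be sketched as an all-pairs similarity between goal and observation feature maps. The tensor shapes, cosine normalization, and function name below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def correlation_volume(goal_feat, obs_feat):
    """All-pairs cosine correlation between a goal feature map and an
    observation feature map, both shaped C x H x W.

    Returns an (H*W) x H x W volume: for each goal location, a map of
    how strongly it matches every observation location.
    """
    C, H, W = goal_feat.shape
    g = goal_feat.reshape(C, -1)                              # C x (H*W)
    o = obs_feat.reshape(C, -1)                               # C x (H*W)
    g = g / (np.linalg.norm(g, axis=0, keepdims=True) + 1e-8)  # unit columns
    o = o / (np.linalg.norm(o, axis=0, keepdims=True) + 1e-8)
    corr = g.T @ o                                            # (H*W) x (H*W)
    return corr.reshape(H * W, H, W)

# Correlating a feature map with itself: each goal location matches its
# own position with cosine similarity 1.
rng = np.random.default_rng(0)
goal = rng.standard_normal((16, 8, 8)).astype(np.float32)
vol = correlation_volume(goal, goal)
print(vol.shape)  # (64, 8, 8)
```

In the method as described, such correlations (rather than the raw semantic features) are what get refined and fed to the policy network for action prediction.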