RSRNav: Reasoning Spatial Relationship for Image-Goal Navigation

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image-goal navigation (ImageNav) suffers from weak directional semantics and poor generalization under viewpoint variations. To address these challenges, we propose a spatial-relation-aware navigation framework that models fine-grained spatial correspondences between the goal image and current observations—moving beyond isolated semantic feature extraction. Our key contributions are: (1) a direction-aware cross-correlation module that explicitly encodes pixel-level azimuthal offsets; (2) a progressive relation refinement mechanism that iteratively optimizes spatial relationship representations in latent space; and (3) an end-to-end policy network grounded solely on relational features, eliminating the need for explicit localization or geometric priors. Evaluated on AI2THOR, RoboTHOR, and Habitat benchmarks, our method achieves state-of-the-art performance across all settings, with substantial success rate gains in the "user-matched goal" setting. It demonstrates strong viewpoint robustness and practical deployment potential.

📝 Abstract
Recent image-goal navigation (ImageNav) methods learn a perception-action policy by separately capturing semantic features of the goal and egocentric images, then passing them to a policy network. However, challenges remain: (1) Semantic features often fail to provide accurate directional information, leading to superfluous actions, and (2) performance drops significantly when viewpoint inconsistencies arise between training and application. To address these challenges, we propose RSRNav, a simple yet effective method that reasons spatial relationships between the goal and current observations as navigation guidance. Specifically, we model the spatial relationship by constructing correlations between the goal and current observations, which are then passed to the policy network for action prediction. These correlations are progressively refined using fine-grained cross-correlation and direction-aware correlation for more precise navigation. Extensive evaluation of RSRNav on three benchmark datasets demonstrates superior navigation performance, particularly in the "user-matched goal" setting, highlighting its potential for real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Semantic features lack directional accuracy in ImageNav
Viewpoint inconsistency reduces navigation performance
Need for spatial reasoning between goal and observations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models spatial relationships for navigation guidance
Uses fine-grained cross-correlation for precision
Incorporates direction-aware correlation refinement
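
The correlation construction the innovations above describe can be illustrated with a minimal sketch. The paper does not publish implementation details here, so the function names, feature shapes, and the use of a dense all-pairs cosine-similarity volume are illustrative assumptions — they show the general idea of correlating every goal-image location with every current-observation location, not RSRNav's actual architecture.

```python
import numpy as np

def normalize(feats):
    # L2-normalize each pixel's channel vector so dot products
    # become cosine similarities in [-1, 1]
    return feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + 1e-8)

def cross_correlation_volume(goal_feats, obs_feats):
    """Dense cross-correlation between goal and observation feature maps.

    goal_feats, obs_feats: (H, W, C) arrays of L2-normalized features
    (e.g. CNN activations). Returns an (H, W, H, W) volume where entry
    [i, j, u, v] is the similarity of goal location (i, j) to
    observation location (u, v) — the raw spatial-relationship signal
    a policy network could consume.
    """
    H, W, C = goal_feats.shape
    g = goal_feats.reshape(H * W, C)
    o = obs_feats.reshape(H * W, C)
    corr = g @ o.T                      # all-pairs cosine similarities
    return corr.reshape(H, W, H, W)

# Toy example: random features standing in for real CNN activations
rng = np.random.default_rng(0)
goal = normalize(rng.standard_normal((8, 8, 32)))
obs = normalize(rng.standard_normal((8, 8, 32)))
vol = cross_correlation_volume(goal, obs)
print(vol.shape)  # (8, 8, 8, 8)
```

In this toy setup, correlating a feature map with itself yields a similarity of 1 at matching locations, which is the sanity check that the volume captures spatial correspondence; a direction-aware variant would additionally bin or weight these correlations by azimuthal offset between the matched locations.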
Zheng Qin
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China
Le Wang
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China
Yabing Wang
Xi’an Jiaotong University
Multimodal Learning
Sanping Zhou
Xi'an Jiaotong University
Computer Vision, Machine Learning
Gang Hua
Director of Applied Science, AI, Amazon.com, Inc., IEEE & IAPR Fellow
Computer Vision, Machine Learning, Artificial Intelligence, Robotics, Multimedia
Wei Tang
Department of Computer Science, University of Illinois, Chicago, IL 60607, USA