Learning 3D Scene Analogies with Neural Contextual Scene Maps

📅 2025-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing data-driven 3D scene understanding methods suffer from poor generalization to unseen or noisy environments, hindering robust adaptation. Method: This paper introduces “3D Scene Analogy” — a novel paradigm that models smooth spatial mappings between corresponding regions across scenes, enabling trajectory transfer, long-horizon imitation learning, and context-aware object rearrangement. We formally define scene-level analogy mapping—surpassing conventional instance-level matching—and propose a neural contextual scene graph for semantic-geometric, coarse-to-fine full-scene alignment. Our approach integrates neural field-based descriptor extraction, multi-scale feature alignment, and end-to-end differentiable mapping estimation. Results: Evaluated on diverse indoor scenes, our method robustly identifies analogy relationships, significantly improving trajectory prediction accuracy and object layout transfer fidelity. We validate its effectiveness on AR/VR interaction and robot simulation tasks, demonstrating superior cross-scene generalization and functional adaptability.

Technology Category

Application Category

📝 Abstract
Understanding scene contexts is crucial for machines to perform tasks and adapt prior knowledge in unseen or noisy 3D environments. As data-driven learning is intractable to comprehensively encapsulate diverse ranges of layouts and open spaces, we propose teaching machines to identify relational commonalities in 3D spaces. Instead of focusing on point-wise or object-wise representations, we introduce 3D scene analogies, which are smooth maps between 3D scene regions that align spatial relationships. Unlike well-studied single instance-level maps, these scene-level maps smoothly link large scene regions, potentially enabling unique applications in trajectory transfer in AR/VR, long demonstration transfer for imitation learning, and context-aware object rearrangement. To find 3D scene analogies, we propose neural contextual scene maps, which extract descriptor fields summarizing semantic and geometric contexts, and holistically align them in a coarse-to-fine manner for map estimation. This approach reduces reliance on individual feature points, making it robust to input noise or shape variations. Experiments demonstrate the effectiveness of our approach in identifying scene analogies and transferring trajectories or object placements in diverse indoor scenes, indicating its potential for robotics and AR/VR applications.
Problem

Research questions and friction points this paper is trying to address.

Identify relational commonalities in 3D spaces
Enable trajectory transfer in AR/VR and imitation learning
Robustly align semantic and geometric contexts in noisy environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural contextual scene maps for 3D analogies
Coarse-to-fine alignment of semantic and geometric contexts
Robust to noise and shape variations in 3D scenes
🔎 Similar Papers
No similar papers found.
J
Junho Kim
Dept. of Electrical and Computer Engineering, Seoul National University
Gwangtak Bae
Gwangtak Bae
Seoul National University
Computer VisionRobotics
E
Eun Sun Lee
Dept. of Electrical and Computer Engineering, Seoul National University
Y
Young Min Kim
Dept. of Electrical and Computer Engineering, Seoul National University; Interdisciplinary Program in Artificial Intelligence and INMC, Seoul National University