🤖 AI Summary
This paper introduces a novel paradigm for generating immersive 3D worlds from a single image without requiring large-scale training. The method addresses single-image-to-3D-environment reconstruction in two stages: first, a pre-trained diffusion model synthesizes geometrically coherent panoramic images; second, these are lifted to 3D with a metric depth estimator, and occluded regions are filled by a 2D inpainting model conditioned on rendered point clouds. The core contribution is reformulating single-image 3D generation as an in-context learning problem, explicitly modeling 3D structure from the start and avoiding the error accumulation inherent in video-synthesis-based approaches. Evaluated on both synthetic and real-world images, the framework produces VR-ready, high-fidelity 3D environments and consistently outperforms state-of-the-art video-synthesis methods on standard metrics including FID, LPIPS, and SSIM.
📝 Abstract
We introduce a recipe for generating immersive 3D worlds from a single image by framing the task as an in-context learning problem for 2D inpainting models. This approach requires minimal training and uses existing generative models. Our process involves two steps: generating coherent panoramas using a pre-trained diffusion model and lifting these into 3D with a metric depth estimator. We then fill unobserved regions by conditioning the inpainting model on rendered point clouds, requiring minimal fine-tuning. Tested on both synthetic and real images, our method produces high-quality 3D environments suitable for VR display. By explicitly modeling the 3D structure of the generated environment from the start, our approach consistently outperforms state-of-the-art video-synthesis-based methods across multiple quantitative image quality metrics. Project Page: https://katjaschwarz.github.io/worlds/
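The two-stage pipeline described in the abstract can be sketched at a high level. This is a minimal structural sketch, not the authors' implementation: every function below is a hypothetical placeholder standing in for a real pre-trained diffusion model, metric depth estimator, and fine-tuned inpainting model, and the array shapes are illustrative assumptions.

```python
import numpy as np

def generate_panorama(image: np.ndarray) -> np.ndarray:
    """Stage 1 (placeholder): synthesize a coherent panorama around the input view.
    A real implementation would call a pre-trained diffusion model."""
    h, w, c = image.shape
    return np.zeros((h, 4 * w, c))  # assumed equirectangular panorama layout

def estimate_metric_depth(panorama: np.ndarray) -> np.ndarray:
    """Placeholder metric depth estimator: per-pixel depth for the panorama."""
    return np.ones(panorama.shape[:2])  # dummy depth map (meters)

def lift_to_point_cloud(panorama: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Back-project panorama pixels into a 3D point cloud using predicted depth.
    Here pixel coordinates stand in for proper ray directions."""
    h, w = depth.shape
    xy = np.stack(np.meshgrid(np.arange(w), np.arange(h)), axis=-1)  # (h, w, 2)
    return np.concatenate([xy, depth[..., None]], axis=-1).reshape(-1, 3)

def inpaint_occlusions(rendered_view: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): fill unobserved regions, conditioned on the
    rendered point cloud; a real system would use the fine-tuned inpainter."""
    return rendered_view

def single_image_to_world(image: np.ndarray) -> np.ndarray:
    """End-to-end sketch: panorama -> depth -> point cloud (-> render + inpaint)."""
    pano = generate_panorama(image)
    depth = estimate_metric_depth(pano)
    points = lift_to_point_cloud(pano, depth)
    # Novel views would be rendered from `points` and holes inpainted with
    # inpaint_occlusions(); rendering itself is omitted in this sketch.
    return points

world = single_image_to_world(np.zeros((256, 256, 3)))
print(world.shape)  # one 3D point per panorama pixel
```

The sketch only fixes the data flow between the stages; all of the generative heavy lifting lives inside the placeholder functions.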