🤖 AI Summary
To address the limited immersion and weak personalization of static 360° panoramas, this paper proposes the first end-to-end, text-driven framework for generating virtual tours. The method combines diffusion-based 360° panorama editing with large language model (LLM)-guided semantic understanding and VR-based multisensory feedback, including haptic interaction, to enable zero-shot style transfer. Given textual prompts, the system redecorates the original panoramic space in real time while preserving geometric consistency, yielding photorealistic, stylistically coherent, and detail-rich virtual environments. Experimental evaluations demonstrate substantial improvements in user immersion and engagement, achieving efficient, controllable, and real-time personalized virtual touring across diverse indoor scenes.
📝 Abstract
MetaDecorator is a framework that empowers users to personalize virtual spaces. By leveraging text-driven prompts and image synthesis techniques, MetaDecorator adorns static panoramas captured by 360° imaging devices, transforming them into uniquely styled and visually appealing environments. This significantly enhances the realism and engagement of virtual tours compared to traditional offerings. Beyond the core framework, we also discuss the integration of Large Language Models (LLMs) and haptics into the VR application to provide a more immersive experience.