TexAVi: Generating Stereoscopic VR Video Clips from Text Descriptions

📅 2024-10-19
🏛️ 2024 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI)
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: End-to-end generation of stereoscopic VR video from text remains an open problem. Method: This paper introduces TexAVi, a three-stage framework that integrates text-to-image diffusion (based on Stable Diffusion), monocular depth estimation (MiDaS), and temporal frame synthesis to generate side-by-side stereoscopic VR videos directly from natural language. High-fidelity 2D frames are generated first; estimated depth then guides the construction of horizontal disparities between left-eye and right-eye views, and temporal modeling maintains spatiotemporal coherence across the output video. Evaluation employs CLIP Score and FID jointly, balancing textual alignment against visual fidelity. Contribution/Results: Experiments validate the feasibility of the "text → 2D frames → stereoscopic VR video" paradigm, reducing reliance on costly real-world capture and manual 3D modeling in VR content creation and bridging a gap in text-driven immersive video generation.
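The depth-guided disparity step can be illustrated with a minimal DIBR-style sketch: each pixel of a generated frame is shifted horizontally by a depth-scaled disparity to synthesize left-eye and right-eye views, which are then stitched side by side. This is an illustrative assumption about the procedure, not the paper's exact implementation; `stereo_pair_from_depth` and `max_disp` are hypothetical names, and the sketch assumes a MiDaS-style inverse-depth map where larger values mean nearer.

```python
import numpy as np

def stereo_pair_from_depth(frame, depth, max_disp=8):
    """Illustrative sketch (not the paper's exact method): shift pixels
    horizontally by a depth-scaled disparity to synthesize left/right
    views, then stitch them into one side-by-side stereoscopic frame.

    frame: (H, W, 3) image; depth: (H, W) map, larger = nearer (assumed).
    """
    h, w = depth.shape
    # Normalize depth to [0, 1] so nearer pixels get larger disparity.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    disp = (d * max_disp).astype(int)

    cols = np.arange(w)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    for y in range(h):
        # Left eye: shift pixels right; right eye: shift pixels left.
        lx = np.clip(cols + disp[y], 0, w - 1)
        rx = np.clip(cols - disp[y], 0, w - 1)
        left[y, lx] = frame[y]
        right[y, rx] = frame[y]

    # Side-by-side stitching, as used for VR headset playback.
    return np.concatenate([left, right], axis=1)
```

A production pipeline would additionally fill the disocclusion holes left by the shifts (e.g. by inpainting), which this sketch omits for brevity.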

📝 Abstract
While generative models such as text-to-image, large language models and text-to-video have seen significant progress, the extension to text-to-virtual-reality remains largely unexplored, due to a deficit in training data and the complexity of achieving realistic depth and motion in virtual environments. This paper proposes an approach to coalesce existing generative systems to form a stereoscopic virtual reality video from text. Carried out in three main stages, we start with a base text-to-image model that captures context from an input text. We then employ Stable Diffusion on the rudimentary image produced, to generate frames with enhanced realism and overall quality. These frames are processed with depth estimation algorithms to create left-eye and right-eye views, which are stitched side-by-side to create an immersive viewing experience. Such systems would be highly beneficial in virtual reality production, since filming and scene building often require extensive hours of work and post-production effort. We utilize image evaluation techniques, specifically Fréchet Inception Distance and CLIP Score, to assess the visual quality of frames produced for the video. These quantitative measures establish the proficiency of the proposed method. Our work highlights the exciting possibilities of using natural language-driven graphics in fields like virtual reality simulations.
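Of the two metrics named in the abstract, CLIP Score is straightforward to sketch: it is commonly defined as 100 times the cosine similarity between CLIP embeddings of an image and its text prompt, clamped at zero. The sketch below uses placeholder vectors; real use would obtain the embeddings from a pretrained CLIP model, and FID would additionally require Inception features over a frame distribution.

```python
import numpy as np

def clip_score(image_emb, text_emb):
    """CLIP Score as commonly defined: 100 * max(cosine similarity, 0)
    between an image embedding and a text embedding. The embeddings here
    are placeholders; in practice both come from a pretrained CLIP model."""
    cos = np.dot(image_emb, text_emb) / (
        np.linalg.norm(image_emb) * np.linalg.norm(text_emb))
    return 100.0 * max(cos, 0.0)
```

A higher score indicates better alignment between the generated frame and the input text, which is why the paper pairs it with FID (visual fidelity) rather than relying on either metric alone.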
Problem

Research questions and friction points this paper is trying to address.

3D virtual reality
text-to-video conversion
depth and motion rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

TexAVi
Stable Diffusion
Depth Estimation
Shruti Jayaraman
Dept. of Computer Science and Engineering, College of Engineering, Guindy, Chennai, India
R. Bhavya
Dept. of Computer Science and Engineering, College of Engineering, Guindy, Chennai, India
Vriksha Srihari
Master's in Robotics, Georgia Institute of Technology
machine learning · generative AI · robotics · LLMs · VLMs
V. Mary Anita Rajam
Dept. of Computer Science and Engineering, College of Engineering, Guindy, Chennai, India