🤖 AI Summary
Existing audio generation models struggle to produce immersive spatial audio that is geometrically aligned with visual content. This paper introduces the first end-to-end, zero-shot, vision-driven spatial audio generation framework, requiring neither paired audio-visual data nor task-specific training, that synthesizes high-fidelity, azimuth-consistent spatial audio directly from arbitrary images or videos, including AI-generated content. The method combines visual object localization, monocular depth estimation, conditional monaural audio generation (via diffusion models or audio language models), sound source separation, and multi-source spatialization (using HRTF-based rendering or ambisonics). It demonstrates strong cross-domain generalization to unseen scenes, substantially enhancing immersion in VR/AR applications, and supports real-time spatial audio synthesis from dynamic visual inputs. Experimental results validate its effectiveness at generating perceptually coherent, scene-adaptive 3D audio without fine-tuning.
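The multi-source spatialization step mentioned above can be illustrated with a much simpler stand-in for HRTF-based binaural rendering: constant-power stereo panning by azimuth. The function names (`pan_stereo`, `mix_sources`) and the panning law are illustrative assumptions, not the paper's actual rendering pipeline:

```python
import numpy as np

def pan_stereo(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Constant-power stereo pan: a simplified stand-in for HRTF rendering.

    azimuth_deg is in [-90, 90]; negative = source to the listener's left,
    positive = to the right. Returns a (2, n_samples) stereo buffer.
    """
    # Map azimuth from [-90, 90] degrees onto a pan angle in [0, pi/2].
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
    left_gain = np.cos(theta)   # loudest when the source is far left
    right_gain = np.sin(theta)  # loudest when the source is far right
    return np.stack([left_gain * mono, right_gain * mono], axis=0)

def mix_sources(sources):
    """Sum several (mono_signal, azimuth_deg) pairs into one stereo buffer."""
    out = sum(pan_stereo(m, az) for m, az in sources)
    return out / max(len(sources), 1)  # naive normalization to avoid clipping
```

Constant-power panning keeps the summed energy of the two channels equal to the mono signal's energy at every pan position, which is why the cosine/sine gain pair is preferred over linear crossfading.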
📝 Abstract
Generating combined visual and auditory sensory experiences is critical for the consumption of immersive content. Recent advances in neural generative models have enabled the creation of high-resolution content across multiple modalities, including images, text, speech, and video. Despite these successes, a significant gap remains in generating high-quality spatial audio that complements generated visual content. Furthermore, current audio generation models excel at producing natural audio, speech, or music, but fall short of integrating the spatial cues necessary for immersive experiences. In this work, we introduce SEE-2-SOUND, a zero-shot approach that decomposes the task into (1) identifying visual regions of interest; (2) locating these elements in 3D space; (3) generating mono audio for each; and (4) integrating them into spatial audio. Using our framework, we demonstrate compelling results for generating spatial audio for high-quality videos, images, and dynamic images from the internet, as well as media generated by learned approaches.
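Step (2) of the pipeline, placing a detected region of interest in 3D space, can be sketched by back-projecting the ROI centroid through a pinhole camera model using the estimated monocular depth. The function names, the fixed field-of-view default, and the coordinate convention below are assumptions for illustration, not the paper's exact formulation:

```python
import math

def roi_to_3d(cx: float, cy: float, depth: float,
              width: int, height: int, fov_deg: float = 60.0):
    """Back-project an ROI centroid (cx, cy), in pixels, to a 3D point.

    depth is the metric distance from a monocular depth estimator;
    fov_deg is an assumed horizontal field of view for the camera.
    Returns (x, y, z): x right of the camera axis, y below it, z forward.
    """
    # Focal length in pixels from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    x = (cx - width / 2.0) * depth / f
    y = (cy - height / 2.0) * depth / f
    z = depth
    return x, y, z

def azimuth_deg(x: float, z: float) -> float:
    """Horizontal angle of the source relative to the viewing direction."""
    return math.degrees(math.atan2(x, z))
```

An ROI centered in a 640x480 frame maps to azimuth 0 (straight ahead), while one at the right edge maps to half the horizontal field of view, which is the azimuth a spatial audio renderer would then use to place that source.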