🤖 AI Summary
This work addresses a key limitation of existing text-to-video diffusion models: they cannot generate high-quality, synchronized audio that aligns semantically, emotionally, and atmospherically with the visual content. To this end, we propose a unified audiovisual generative foundation model built on an asymmetric dual-stream Transformer architecture, comprising a 14B-parameter video stream and a 5B-parameter audio stream. The model combines modality-aware classifier-free guidance (CFG), cross-modal AdaLN, and bidirectional audio-video cross-attention to achieve efficient co-generation and precise temporal alignment. Together with temporal positional embeddings and a multilingual text encoder, this design achieves state-of-the-art audiovisual quality and prompt fidelity among open-source systems, matching closed-source counterparts at substantially lower computational cost and inference latency.
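The bidirectional coupling described above can be sketched with standard scaled dot-product attention: each stream's tokens query the other modality's tokens, and the result is added back residually. This is a minimal NumPy illustration only; the token counts, dimensions, and single-head form are assumptions, not the model's actual 14B/5B configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Scaled dot-product attention: one modality's tokens (queries)
    # attend to the other modality's tokens (used as both keys and values).
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

# Toy token streams (hypothetical sizes for illustration only).
d = 8
rng = np.random.default_rng(0)
video_tokens = rng.normal(size=(6, d))   # 6 video tokens
audio_tokens = rng.normal(size=(4, d))   # 4 audio tokens

# Bidirectional coupling: each stream queries the other, with a
# residual connection so each stream keeps its own representation.
video_tokens = video_tokens + cross_attention(video_tokens, audio_tokens, d)
audio_tokens = audio_tokens + cross_attention(audio_tokens, video_tokens, d)
```

Because the two attention directions are separate calls, the video and audio streams can have different widths and depths in practice, which is what makes the asymmetric 14B/5B split workable.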
📝 Abstract
Recent text-to-video diffusion models can generate compelling video sequences, yet they remain silent -- missing the semantic, emotional, and atmospheric cues that audio provides. We introduce LTX-2, an open-source foundational model capable of generating high-quality, temporally synchronized audiovisual content in a unified manner. LTX-2 consists of an asymmetric dual-stream transformer with a 14B-parameter video stream and a 5B-parameter audio stream, coupled through bidirectional audio-video cross-attention layers with temporal positional embeddings and cross-modality AdaLN for shared timestep conditioning. This architecture enables efficient training and inference of a unified audiovisual model while allocating more capacity for video generation than audio generation. We employ a multilingual text encoder for broader prompt understanding and introduce a modality-aware classifier-free guidance (modality-CFG) mechanism for improved audiovisual alignment and controllability. Beyond generating speech, LTX-2 produces rich, coherent audio tracks that follow the characters, environment, style, and emotion of each scene -- complete with natural background and foley elements. In our evaluations, the model achieves state-of-the-art audiovisual quality and prompt adherence among open-source systems, while delivering results comparable to proprietary models at a fraction of their computational cost and inference time. All model weights and code are publicly released.
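One plausible reading of the modality-CFG mechanism is standard classifier-free guidance applied with a separate guidance scale per modality, so audio and video controllability can be tuned independently. The sketch below assumes a flat token layout with video tokens first; the split, the scales, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def modality_cfg(eps_uncond, eps_cond, n_video, scale_video, scale_audio):
    """Classifier-free guidance with a per-modality guidance scale.

    Standard CFG computes eps_uncond + s * (eps_cond - eps_uncond);
    here the first n_video rows (video tokens) use scale_video and the
    remaining rows (audio tokens) use scale_audio. The token layout is
    a hypothetical simplification for illustration.
    """
    out = eps_uncond.copy()
    out[:n_video] += scale_video * (eps_cond[:n_video] - eps_uncond[:n_video])
    out[n_video:] += scale_audio * (eps_cond[n_video:] - eps_uncond[n_video:])
    return out

# Toy denoiser outputs: 6 video tokens + 4 audio tokens, 8 dims each.
eps_uncond = np.zeros((10, 8))
eps_cond = np.ones((10, 8))
guided = modality_cfg(eps_uncond, eps_cond, n_video=6,
                      scale_video=7.5, scale_audio=4.0)
```

Setting both scales to 1.0 recovers the purely conditional prediction, and setting a scale to 0.0 disables guidance for that modality, which is the controllability knob the summary refers to.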