🤖 AI Summary
Existing video compression methods fail to effectively leverage the semantic representation capabilities of multimodal large language models (MLLMs), leaving a fundamental trade-off between compression efficiency and semantic fidelity. To address this, the authors propose Cross-Modality Video Coding (CMVC), a novel paradigm that first disentangles a video into spatial content and motion components; then transforms them into an extremely compact cross-modal representation by leveraging MLLMs; and finally reconstructs the video generatively through two decoding modes, Text-Text-to-Video (TT2V) and Image-Text-to-Video (IT2V). A lightweight LoRA-tuned frame interpolation module provides efficient motion modeling for the IT2V mode. Experiments demonstrate that TT2V achieves effective semantic reconstruction, while IT2V delivers competitive perceptual consistency. This work pioneers the explicit integration of MLLMs into a video coding architecture, validating both its effectiveness and feasibility.
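The encode/decode flow described above can be sketched as a minimal pipeline. All function and field names below are hypothetical illustrations of the idea (MLLM captions as the compact representation, keyframes retained only for IT2V), not the paper's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class CrossModalBitstream:
    """Hypothetical compact representation: spatial content and motion
    are each described as text by an MLLM; keyframes are kept only
    when the IT2V decoding mode will be used."""
    content_caption: str                           # spatial content as text
    motion_caption: str                            # motion component as text
    keyframes: list = field(default_factory=list)  # only for IT2V mode

def encode(video_frames, mllm_describe, keep_keyframes=False):
    """Disentangle video into content/motion, then compress via MLLM captions."""
    content = mllm_describe(video_frames[0], aspect="content")
    motion = mllm_describe(video_frames, aspect="motion")
    keyframes = [video_frames[0], video_frames[-1]] if keep_keyframes else []
    return CrossModalBitstream(content, motion, keyframes)

def decode(bitstream, tt2v_model, it2v_model):
    """Choose the decoding mode based on what was encoded:
    IT2V interpolates between keyframes guided by the motion caption;
    TT2V regenerates the video from text alone."""
    if bitstream.keyframes:
        return it2v_model(bitstream.keyframes, bitstream.motion_caption)
    return tt2v_model(bitstream.content_caption + " " + bitstream.motion_caption)
```

The design choice this illustrates: the bitrate is dominated by a few short text strings (plus optional keyframes), and reconstruction quality is traded against bitrate by selecting the decoding mode rather than by tuning a quantizer.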
📝 Abstract
Existing codecs are designed to eliminate intrinsic redundancies to create a compact representation for compression. However, strong external priors from Multimodal Large Language Models (MLLMs) have not been explicitly explored in video compression. Herein, we introduce a unified paradigm for Cross-Modality Video Coding (CMVC), a pioneering approach that explores multimodal representation and video generative models in video coding. Specifically, on the encoder side, we disentangle a video into spatial content and motion components, which are subsequently transformed into distinct modalities to achieve a very compact representation by leveraging MLLMs. During decoding, the previously encoded components and video generation models are leveraged to create multiple encoding-decoding modes that optimize video reconstruction quality for specific decoding requirements, including a Text-Text-to-Video (TT2V) mode to ensure high-quality semantic information and an Image-Text-to-Video (IT2V) mode to achieve superb perceptual consistency. In addition, we propose an efficient frame interpolation model for the IT2V mode via Low-Rank Adaptation (LoRA) tuning to guarantee perceptual quality, allowing the generated motion cues to evolve smoothly. Experiments on benchmarks indicate that TT2V achieves effective semantic reconstruction, while IT2V exhibits competitive perceptual consistency. These results highlight potential directions for future research in video coding.
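The LoRA tuning used for the IT2V interpolation model amounts to freezing a weight matrix W and learning only a low-rank update ΔW = (α/r)·BA, which is why the adaptation is cheap. A minimal NumPy sketch of generic LoRA (the shapes, rank, and scaling below are illustrative assumptions, not the paper's specific configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Frozen weight W plus low-rank update: y = x @ (W + (alpha/r) * B @ A).T
    Only A (r x d_in) and B (d_out x r) are trained, so the tuned
    parameter count is r * (d_in + d_out) instead of d_in * d_out."""
    delta_W = (alpha / r) * (B @ A)  # rank-r update, cheap to store and train
    return x @ (W + delta_W).T

# Illustrative shapes: a 512 -> 512 projection adapted with rank 8
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16
W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.02      # trainable down-projection
B = np.zeros((d_out, r))                       # zero init: start exactly at the base model
x = rng.standard_normal((4, d_in))

y = lora_forward(x, W, A, B, alpha, r)
# With B = 0 the adapted layer matches the frozen layer exactly
assert np.allclose(y, x @ W.T)
```

Initializing B to zero is the standard LoRA choice: at the start of tuning the adapted model is identical to the frozen base, and training only perturbs it through the rank-r subspace.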