Capybara-OMNI: An Efficient Paradigm for Building Omni-Modal Language Models

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high development cost and poor generalization of multimodal large language models (MLLMs), this paper proposes a lightweight and efficient quad-modal (text/image/video/audio) understanding framework. Methodologically: (i) it pairs modality-specific encoders with modular multimodal adapters under a unified tokenization scheme; (ii) it introduces a progressive training paradigm spanning all modalities, integrating cross-modal alignment distillation, instruction tuning, and dialogue reinforcement; (iii) it constructs a high-quality multimodal dataset and proposes a dedicated evaluation benchmark. Contributions include: (1) state-of-the-art multimodal understanding performance at comparable parameter counts; (2) a Chat variant that significantly enhances real-time interactive capabilities; and (3) release of the model weights, a portion of the training data, and the inference code on GitHub, supporting reproducibility.
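The summary above describes the architecture only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of the layout it suggests: modality-specific encoders, lightweight adapters that project features into the language model's embedding space, and a shared backbone that consumes one unified token sequence. All module names, dimensions, and the toy backbone are assumptions for illustration, not the paper's actual components.

```python
# Minimal sketch of a quad-modal layout: per-modality encoders,
# adapter/projector modules, and a shared model over one unified
# token sequence. Names and sizes are hypothetical, not the paper's.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Projects encoder features into the LLM embedding space."""

    def __init__(self, enc_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(enc_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)


class OmniModel(nn.Module):
    def __init__(self, llm_dim: int = 1024):
        super().__init__()
        # Stand-ins for pretrained modality encoders (e.g. a ViT for
        # image/video frames, a speech encoder for audio).
        self.encoders = nn.ModuleDict({
            "image": nn.Linear(768, 768),
            "video": nn.Linear(768, 768),
            "audio": nn.Linear(512, 512),
        })
        self.adapters = nn.ModuleDict({
            "image": Adapter(768, llm_dim),
            "video": Adapter(768, llm_dim),
            "audio": Adapter(512, llm_dim),
        })
        # Stand-in for the backbone LLM (a decoder-only transformer
        # in practice; a small encoder stack here for brevity).
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, text_emb: torch.Tensor, **modal_feats) -> torch.Tensor:
        # Encode each present modality and project it to LLM tokens.
        parts = [text_emb]
        for name, feats in modal_feats.items():
            parts.append(self.adapters[name](self.encoders[name](feats)))
        # One unified sequence of text tokens plus modality tokens.
        return self.llm(torch.cat(parts, dim=1))


model = OmniModel()
out = model(
    torch.randn(1, 16, 1024),       # text embeddings
    image=torch.randn(1, 64, 768),  # image patch features
    audio=torch.randn(1, 32, 512),  # audio frame features
)
print(out.shape)  # torch.Size([1, 112, 1024])
```

The key design point is that each modality only needs a small trainable adapter; the expensive encoders and the backbone can be reused or frozen, which is what keeps training lightweight.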

📝 Abstract
With the development of Multimodal Large Language Models (MLLMs), numerous outstanding accomplishments have emerged within the open-source community. However, due to the complexity of creating and training on multimodal data pairs, building powerful MLLMs remains computationally expensive and time-consuming. In this work, we introduce Capybara-OMNI, an MLLM that is trained in a lightweight and efficient manner and supports understanding of text, image, video, and audio modalities. We present in detail the framework design, the data construction, and the training recipe for developing an MLLM step by step to obtain competitive performance. We also provide the dedicated benchmarks used in our experiments to show how to properly verify understanding capabilities across the different modalities. Results show that by following our guidance, one can efficiently build an MLLM that achieves competitive performance among models of the same scale on various multimodal benchmarks. Additionally, to enhance the model's multimodal instruction-following and conversational capabilities, we further discuss how to train a chat version on top of an MLLM understanding model, which better matches user habits in tasks such as real-time interaction with humans. We publicly release the Capybara-OMNI model along with its chat-based version, including the model weights, a portion of the training data, and the inference code, all available on GitHub.
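As a rough illustration of what a step-by-step training recipe for such a model can look like, the sketch below walks through a staged schedule: alignment training for the adapters with the backbone frozen, then multimodal instruction tuning, then chat fine-tuning. The stage names, trainable parameter sets, and data mixes here are assumptions for illustration, not the schedule published in the paper.

```python
# Hedged sketch of a stage-wise MLLM training recipe. Stage names,
# frozen sets, and data mixes are illustrative assumptions only.
STAGES = [
    {
        "name": "cross-modal alignment",
        "trainable": ["adapters"],  # encoders and backbone LLM frozen
        "data": ["image-text pairs", "audio-text pairs", "video captions"],
    },
    {
        "name": "multimodal instruction tuning",
        "trainable": ["adapters", "llm"],
        "data": ["multimodal instruction-response data"],
    },
    {
        "name": "chat fine-tuning (the Chat variant)",
        "trainable": ["adapters", "llm"],
        "data": ["multi-turn dialogue / real-time interaction data"],
    },
]


def set_trainable(model, trainable):
    """Freeze everything, then re-enable the listed submodules."""
    for p in model.parameters():
        p.requires_grad = False
    for name in trainable:
        for p in getattr(model, name).parameters():
            p.requires_grad = True


for stage in STAGES:
    print(f"stage: {stage['name']} -> training {stage['trainable']}")
    # set_trainable(model, stage["trainable"])  # e.g. the model sketched above
    # ...run this stage's training loop over stage["data"]...
```

Progressive schedules of this kind are common in open-source MLLM training because early alignment stages are cheap (only adapters update) and later stages start from already-aligned representations.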
Problem

Research questions and friction points this paper is trying to address.

Efficiently building lightweight omni-modal language models
Improving multimodal understanding across text, image, video, audio
Enhancing instruction-following and conversational capabilities for MLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight efficient training for omni-modal MLLMs
Detailed framework design and data construction
Multimodal benchmarks for performance verification
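To make the benchmark-verification point concrete, below is a minimal sketch of a per-modality evaluation loop. The benchmark names are common public suites used here only as placeholders, and the predict/load_examples hooks are hypothetical; the paper's actual benchmark suite may differ.

```python
# Minimal per-modality benchmark evaluation loop. Benchmark names and
# the predict/load_examples hooks are hypothetical placeholders.
from collections import defaultdict

BENCHMARKS = {
    "image": ["MMBench", "MMMU"],
    "video": ["Video-MME"],
    "audio": ["LibriSpeech-ASR"],
}


def evaluate(predict, load_examples):
    """`predict(example) -> answer`; `load_examples(name) -> iterable`."""
    scores = defaultdict(dict)
    for modality, names in BENCHMARKS.items():
        for name in names:
            examples = list(load_examples(name))
            correct = sum(predict(ex) == ex["answer"] for ex in examples)
            scores[modality][name] = correct / max(len(examples), 1)
    return dict(scores)
```

Reporting scores per modality, rather than a single aggregate, is what lets a reader check that none of the four modalities regressed during joint training.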
👥 Authors

Xingguang Ji (Kuaishou Technology)
Jiakang Wang (Kuaishou Technology)
Hongzhi Zhang (Professor of Computer Science and Technology, Harbin Institute of Technology; Deep Learning, Artificial Intelligence, Computer Vision)
Jingyuan Zhang (Kuaishou Technology)
Haonan Zhou (HKU Business School)
Chenxi Sun (Kuaishou Technology)
Yahui Liu (Kuaishou Technology)
Qi Wang (Kuaishou Technology)
Fuzheng Zhang (Kuaishou Technology)