Uni-X: Mitigating Modality Conflict with a Two-End-Separated Architecture for Unified Multimodal Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Unified multimodal models (UMMs) built on shared autoregressive Transformers suffer from severe cross-modal gradient conflicts, particularly in shallow and deep layers, due to fundamental statistical disparities between images and text; these conflicts hinder training efficiency and semantic fusion. To address this, the paper proposes Uni-X: an "X-shaped" architecture featuring modality-specific front-end and back-end layers for low-level feature processing, with only the intermediate layers shared to enable high-level semantic alignment. This "separate-ends, shared-middle" design is the first to systematically mitigate cross-modal gradient interference, achieving a superior trade-off between parameter sharing and modality-specific learning. Experiments show that the 3B Uni-X model achieves a GenEval score of 82 for image generation, outperforms a 7B baseline in vision-language understanding, and significantly improves training efficiency.

📝 Abstract
Unified Multimodal Models (UMMs) built on shared autoregressive (AR) transformers are attractive for their architectural simplicity. However, we identify a critical limitation: when trained on multimodal inputs, modality-shared transformers suffer from severe gradient conflicts between vision and text, particularly in shallow and deep layers. We trace this issue to the fundamentally different low-level statistical properties of images and text, while noting that conflicts diminish in middle layers where representations become more abstract and semantically aligned. To overcome this challenge, we propose Uni-X, a two-end-separated, middle-shared architecture. Uni-X dedicates its initial and final layers to modality-specific processing, while maintaining shared parameters in the middle layers for high-level semantic fusion. This X-shaped design not only eliminates gradient conflicts at both ends but also further alleviates residual conflicts in the shared layers. Extensive experiments validate the effectiveness of Uni-X. Under identical training conditions, Uni-X achieves superior training efficiency compared to strong baselines. When scaled to 3B parameters with larger training data, Uni-X matches or surpasses 7B AR-based UMMs, achieving a GenEval score of 82 for image generation alongside strong performance in text and vision understanding tasks. These results establish Uni-X as a parameter-efficient and scalable foundation for future unified multimodal modeling. Our code is available at https://github.com/CURRENTF/Uni-X
Problem

Research questions and friction points this paper is trying to address.

Mitigating gradient conflicts between vision and text modalities (see the diagnostic sketch after this list)
Separating modality-specific processing in initial and final layers
Enabling parameter-efficient unified multimodal understanding and generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-end-separated architecture mitigates modality conflicts
Modality-specific processing in shallow and deep layers
Shared middle layers enable semantic fusion across modalities
👥 Authors
Jitai Hao
Harbin Institute of Technology (Shenzhen)
Efficient AI, MLLMs, LLMs
Hao Liu
Baidu Inc.
Xinyan Xiao
Baidu
Natural Language Processing, Statistical Machine Translation
Qiang Huang
The School of Intelligence Science and Engineering, Harbin Institute of Technology, Shenzhen
Jun Yu
The School of Intelligence Science and Engineering, Harbin Institute of Technology, Shenzhen; Pengcheng Laboratory