SAILViT: Towards Robust and Generalizable Visual Backbones for MLLMs via Gradual Feature Refinement

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision transformers (ViTs) employed as visual backbones in multimodal large language models (MLLMs) face two key challenges: parameter initialization conflicts and the vision–language semantic gap, both of which hinder effective end-to-end co-training. To address these, the paper proposes a gradual feature refinement mechanism that jointly enhances visual representations and world knowledge through three synergistic components: (i) multi-stage feature alignment, from coarse to fine granularity; (ii) cross-modal contrastive learning; and (iii) connector co-optimization. The method is fully compatible with standard ViT architectures and requires no modification to the backbone. Evaluated on the OpenCompass benchmark, it consistently improves performance across diverse downstream tasks and demonstrates strong robustness and generalization across model scales, architectural variants, and data regimes. This work provides a generic, efficient adaptation framework for integrating ViT-based visual backbones into MLLMs.
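The coarse-to-fine refinement described above can be pictured as a stage-wise training schedule in which progressively more modules are unfrozen. The sketch below is an illustrative assumption, not the paper's actual recipe: the module names (`vit`, `connector`, `llm`), stage boundaries, and loss labels are hypothetical placeholders chosen to mirror the three components named in the summary.

```python
# Hypothetical sketch of a staged "gradual feature refinement" schedule.
# Module names, stage order, and objectives are illustrative assumptions,
# not the paper's published training configuration.

from dataclasses import dataclass

@dataclass
class StageConfig:
    name: str
    trainable: tuple  # modules that receive gradients in this stage
    objective: str    # loss used in this stage

SCHEDULE = [
    # Stage 1: coarse alignment -- adapt only the connector so the frozen
    # ViT's features land in the LLM embedding space.
    StageConfig("coarse_align", ("connector",), "next_token_lm"),
    # Stage 2: fine-grained alignment -- unfreeze the ViT and add a
    # cross-modal contrastive term to narrow the semantic gap.
    StageConfig("fine_align", ("vit", "connector"), "lm_plus_contrastive"),
    # Stage 3: joint co-training -- refine all modules end-to-end.
    StageConfig("co_train", ("vit", "connector", "llm"), "next_token_lm"),
]

def trainable_modules(step: int, steps_per_stage: int = 1000) -> tuple:
    """Return the modules that should be unfrozen at a given training step."""
    stage = min(step // steps_per_stage, len(SCHEDULE) - 1)
    return SCHEDULE[stage].trainable
```

In a real training loop, `trainable_modules(step)` would drive `requires_grad` flags on the corresponding parameter groups; the key idea the sketch captures is that the backbone is never thrown into full co-training cold, but is eased in through intermediate alignment stages.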

📝 Abstract
Vision Transformers (ViTs) are essential as foundation backbones in establishing the visual comprehension capabilities of Multimodal Large Language Models (MLLMs). Although most ViTs achieve impressive performance through image-text pair-based contrastive learning or self-supervised mechanisms, they struggle to engage in connector-based co-training directly with LLMs due to potential parameter initialization conflicts and modality semantic gaps. To address the above challenges, this paper proposes SAILViT, a gradual feature learning-enhanced ViT for facilitating MLLMs to break through performance bottlenecks in complex multimodal interactions. SAILViT achieves coarse-to-fine-grained feature alignment and world knowledge infusion with gradual feature refinement, which better serves target training demands. We perform thorough empirical analyses to confirm the powerful robustness and generalizability of SAILViT across different dimensions, including parameter sizes, model architectures, training strategies, and data scales. Equipped with SAILViT, existing MLLMs show significant and consistent performance improvements on the OpenCompass benchmark across extensive downstream tasks. SAILViT series models are released at https://huggingface.co/BytedanceDouyinContent.
Problem

Research questions and friction points this paper is trying to address.

Addressing parameter conflicts in ViT-LLM co-training
Enhancing multimodal interaction via gradual feature refinement
Improving robustness across architectures and data scales
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradual feature refinement for robust alignment
Coarse-to-fine-grained feature learning enhancement
Compatible training with MLLMs via semantic bridging