ModernVBERT: Towards Smaller Visual Document Retrievers

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models (VLMs) for visual document retrieval typically rely on fine-tuning large decoder models, which limits both retrieval performance and computational efficiency. To address this, the authors propose ModernVBERT, a compact 250M-parameter multimodal retrieval model. Methodologically, it combines higher-resolution image inputs, attention masking, modality-alignment data regimes, and a late-interaction contrastive learning objective, departing from the conventional end-to-end fine-tuning paradigm. The key contribution is strong retrieval performance at a much smaller model size: on standard document retrieval benchmarks, ModernVBERT outperforms models up to ten times larger while improving both accuracy and inference efficiency. Code and pretrained models are publicly available.

📝 Abstract
Multimodal embedding models are gaining prevalence, notably for document retrieval as efficient alternatives to text-only pipelines. These models are typically built by finetuning large vision-language decoders (VLMs) with contrastive losses on text-image pairs. In this work, we show that, while cost-efficient, this repurposing approach often bottlenecks retrieval performance. Through controlled experiments, we establish a principled recipe for improving visual document retrieval models. We notably measure the impact of attention masking, image resolution, modality alignment data regimes, and late interaction centered contrastive objectives which emerge as central performance factors. Building on these insights, we release ModernVBERT, a compact 250M-parameter vision-language encoder that outperforms models up to 10 times larger when finetuned on document retrieval tasks. Models and code are made available at https://huggingface.co/ModernVBERT.
Problem

Research questions and friction points this paper is trying to address.

Optimizing compact multimodal embedding models for document retrieval
Addressing performance bottlenecks in vision-language model repurposing
Improving visual document retrieval through attention and modality alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact 250M-parameter vision-language encoder design
Attention masking and image resolution optimization techniques
Late interaction contrastive objectives for modality alignment
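The late-interaction objective listed above refers to ColBERT-style scoring, where every query token is matched against its most similar document token (MaxSim) and the per-token maxima are summed. The sketch below illustrates that scoring rule on random embeddings; it is a minimal illustration, not the paper's implementation, and the embedding sizes are hypothetical.

```python
import numpy as np

def late_interaction_score(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """ColBERT-style MaxSim: each query token matches its best document token."""
    # Normalize token embeddings so dot products are cosine similarities.
    q = query_tokens / np.linalg.norm(query_tokens, axis=-1, keepdims=True)
    d = doc_tokens / np.linalg.norm(doc_tokens, axis=-1, keepdims=True)
    sim = q @ d.T                 # (num_query_tokens, num_doc_tokens)
    # Max over document tokens, then sum over query tokens.
    return float(sim.max(axis=1).sum())

# Toy example: 8 query tokens and 64 document patch embeddings, dim 128.
rng = np.random.default_rng(0)
query = rng.normal(size=(8, 128))
doc = rng.normal(size=(64, 128))
score = late_interaction_score(query, doc)
```

In a contrastive training setup, scores like this for matching and non-matching text-image pairs would feed a softmax-style loss; at retrieval time, the same scoring ranks candidate document pages.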
Authors

Paul Teiletche (Illuin Technology, EPFL)
Quentin Macé (Illuin Technology, CentraleSupélec, Université Paris-Saclay)
Max Conti (Illuin Technology)
Antonio Loison (Illuin Technology)
Gautier Viaud (Illuin Technology)
Pierre Colombo (Equall; CentraleSupélec, Université Paris-Saclay)
Manuel Faysse (CentraleSupélec, Université Paris-Saclay)