MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multilingual vision-language (VL) benchmarks cover few languages (typically fewer than 10), heavily favor high-resource languages, and lack systematic evaluation of VL capabilities for low-resource languages. To address this, we introduce MVL-SIB, the first cross-modal topical matching benchmark spanning 205 languages, supporting both VL and text-only evaluation. Our methodology includes multilingual text embedding alignment, cross-modal similarity modeling, and a standardized evaluation protocol, applied uniformly across open-weight LVLMs and GPT-4o(-mini). Key findings: (1) cross-modal performance in LVLMs degrades significantly faster than textual understanding for low-resource languages; (2) multi-image input yields no consistent improvement; (3) models perform near-randomly on severely under-resourced languages such as N'Koo. MVL-SIB constitutes the most rigorous and linguistically comprehensive probe of multilingual VL understanding to date.

📝 Abstract
Existing multilingual vision-language (VL) benchmarks often only cover a handful of languages. Consequently, evaluations of large vision-language models (LVLMs) predominantly target high-resource languages, underscoring the need for evaluation data for low-resource languages. To address this limitation, we introduce MVL-SIB, a massively multilingual vision-language benchmark that evaluates both cross-modal and text-only topical matching across 205 languages -- over 100 more than the most multilingual existing VL benchmarks encompass. We then benchmark a range of open-weight LVLMs together with GPT-4o(-mini) on MVL-SIB. Our results reveal that LVLMs struggle with cross-modal topic matching in lower-resource languages, performing no better than chance on languages like N'Koo. Our analysis further reveals that VL support in LVLMs declines disproportionately relative to textual support for lower-resource languages, as evidenced by comparing cross-modal and text-only topical matching performance. We further observe that open-weight LVLMs do not benefit from representing a topic with more than one image, suggesting that these models are not yet fully effective at handling multi-image tasks. By correlating performance on MVL-SIB with other multilingual VL benchmarks, we highlight that MVL-SIB serves as a comprehensive probe of multilingual VL understanding in LVLMs.
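To make the task format concrete, here is a minimal sketch of cross-modal topical matching: given one or more images representing a topic, pick the candidate sentence that best matches that topic. This is a simplified illustration only; it assumes an embedding-based scorer with invented toy vectors, whereas the paper actually evaluates LVLMs via prompting. The function names (`cosine_sim`, `match_topic`) are hypothetical, not from the benchmark's code.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_topic(image_embs, candidate_embs):
    """Pick the candidate sentence whose embedding is closest to the
    mean of the topic's image embeddings; returns the candidate index."""
    topic_vec = np.mean(image_embs, axis=0)
    scores = [cosine_sim(topic_vec, c) for c in candidate_embs]
    return int(np.argmax(scores))

# Toy example: 2 images of one topic, 3 candidate sentences.
# All embeddings here are synthetic stand-ins for real encoder outputs.
rng = np.random.default_rng(0)
topic_images = rng.normal(size=(2, 8))
candidates = np.stack([
    rng.normal(size=8),
    # Candidate 1 is deliberately constructed to lie near the topic images.
    topic_images.mean(axis=0) + 0.05 * rng.normal(size=8),
    rng.normal(size=8),
])
pred = match_topic(topic_images, candidates)  # → 1
```

In the benchmark itself, an LVLM is prompted with the images and candidate sentences directly; the multi-image finding above corresponds to varying how many `image_embs` rows represent the topic.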
Problem

Research questions and friction points this paper is trying to address.

Multilingual vision-language benchmark
Low-resource language evaluation
Cross-modal topical matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual vision-language benchmark
Cross-modal topical matching
Open-weight LVLMs evaluation