Multimodal Evaluation of Russian-language Architectures

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
The absence of Russian-language multimodal benchmarks hinders systematic evaluation of the capabilities, limitations, and risks of multimodal large language models (MLLMs). Method: We introduce Mera Multi, the first open-source, Russian-centric multimodal evaluation framework, covering the text, image, audio, and video modalities. It comprises 18 original tasks designed for both general-purpose and modality-specific architectures. We construct culturally adapted Russian multimodal datasets and propose a unified taxonomy of multimodal competencies. Adopting an instruction-based benchmarking paradigm, we standardize prompt templates and evaluation metrics, and integrate watermarking, licensing, and leakage-prevention mechanisms to ensure data security and comparability across open- and closed-source models. Contribution/Results: Mera Multi establishes a standardized, reproducible evaluation baseline for Russian multimodal research and provides a benchmarking methodology transferable to other Slavic languages.

📝 Abstract
Multimodal large language models (MLLMs) are currently at the center of research attention, showing rapid progress in scale and capabilities, yet their intelligence, limitations, and risks remain insufficiently understood. To address these issues, particularly in the context of the Russian language, where no multimodal benchmarks currently exist, we introduce Mera Multi, an open multimodal evaluation framework for Russian-speaking architectures. The benchmark is instruction-based and covers the text, image, audio, and video modalities, comprising 18 newly constructed evaluation tasks for both general-purpose models and modality-specific architectures (image-to-text, video-to-text, and audio-to-text). Our contributions include: (i) a universal taxonomy of multimodal abilities; (ii) 18 datasets created entirely from scratch with attention to Russian cultural and linguistic specificity, with unified prompts and metrics; (iii) baseline results for both closed-source and open-source models; (iv) a methodology for preventing benchmark leakage, including watermarking and licenses for private sets. While our current focus is on Russian, the proposed benchmark provides a replicable methodology for constructing multimodal benchmarks in typologically diverse languages, particularly within the Slavic language family.
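The instruction-based paradigm described above, a fixed prompt template plus a deterministic metric shared across all models, can be sketched as follows. This is a hypothetical illustration, not the framework's real API: the `Task` dataclass, `exact_match` metric, and `evaluate` loop are all assumed names.

```python
# Hypothetical sketch of instruction-based benchmarking: each task pairs a
# shared prompt template with a deterministic metric, so open- and
# closed-source models are queried and scored identically.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    instruction: str                      # template with a {sample} placeholder
    metric: Callable[[str, str], float]   # (prediction, gold) -> score in [0, 1]

def exact_match(prediction: str, gold: str) -> float:
    """Case- and whitespace-insensitive exact match."""
    return float(prediction.strip().lower() == gold.strip().lower())

def evaluate(task: Task, model: Callable[[str], str],
             samples: List[dict]) -> float:
    """Render the shared template for each sample, query the model,
    and average the per-sample metric."""
    scores = [
        task.metric(model(task.instruction.format(sample=s["input"])), s["gold"])
        for s in samples
    ]
    return sum(scores) / len(scores)

# Toy usage: a stand-in "model" that echoes the last word of the question.
task = Task(
    name="toy-qa",
    instruction="Answer in one word.\nQuestion: {sample}\nAnswer:",
    metric=exact_match,
)

def echo_model(prompt: str) -> str:
    return prompt.splitlines()[-2].split()[-1].rstrip("?")

data = [{"input": "red?", "gold": "red"}]
print(evaluate(task, echo_model, data))  # 1.0
```

Because the template and metric live in the task definition rather than in model-specific glue code, adding a new model only requires implementing the `model(prompt) -> str` callable.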
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal Russian-language AI models, for which no benchmarks previously existed
Systematically assessing the intelligence and limitations of multimodal language architectures
Creating a culturally aware evaluation framework for Slavic-language AI capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

An open multimodal evaluation framework for Russian
An instruction-based benchmark spanning four modalities
A leakage-prevention methodology based on watermarking and licensing
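One common way to detect benchmark leakage, sketched below, is to embed a unique high-entropy "canary" string in each private test sample and later probe model outputs or scraped corpora for it. This is an illustrative assumption about how such a mechanism can work; the function names, marker format, and plain-text embedding are hypothetical, not Mera Multi's actual scheme.

```python
# Hypothetical canary-based leakage check: derive a deterministic marker per
# private sample, embed it, and search suspect text for it later.
import hashlib

def make_canary(benchmark: str, sample_id: int) -> str:
    """Derive a deterministic, high-entropy marker for one test sample."""
    digest = hashlib.sha256(f"{benchmark}:{sample_id}".encode()).hexdigest()[:16]
    return f"CANARY-{digest}"

def watermark(sample_text: str, canary: str) -> str:
    # A real system would hide the marker (e.g. in metadata or an image
    # watermark); appending it in plain text keeps the sketch simple.
    return f"{sample_text}\n<!-- {canary} -->"

def is_leaked(corpus: str, canary: str) -> bool:
    """True if the corpus contains the sample's canary string."""
    return canary in corpus

c = make_canary("mera-multi-private", 7)
marked = watermark("Какой город изображён на фото?", c)
print(is_leaked(marked, c))        # True: the marked sample carries the canary
print(is_leaked("clean text", c))  # False
```

The marker is derived with SHA-256, so it is reproducible from the benchmark name and sample ID without storing a lookup table, yet long enough that an accidental collision in web-scale text is implausible.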
Authors

Artem Chervyakov (SberAI, Artificial Intelligence)
Ulyana Isaeva (MERA Team)
Anton A. Emelyanov (MERA Team)
Artem Safin (MERA Team)
Maria Tikhonova (MERA Team)
Alexander Kharitonov (Natural Language Processing Researcher)
Yulia Lyakh (MERA Team)
Petr Surovtsev (MERA Team)
Denis Shevelev (MERA Team)
Vildan Saburov (MERA Team)
Vasily Konovalov (affiliation unknown)
Elisei Rykov (MERA Team)
Ivan Sviridov (affiliation unknown)
Amina Miftakhova (MERA Team)
I. Alimova (MERA Team)
Alexander Panchenko (Associate Professor for Natural Language Processing)
A. Kapitanov (MERA Team)
A. Fenogenova (MERA Team)