ViInfographicVQA: A Benchmark for Single and Multi-image Visual Question Answering on Vietnamese Infographics

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces ViInfographicVQA, the first large-scale, real-world benchmark for Vietnamese infographic visual question answering, comprising 6,747 infographics and 20,409 human-verified QA pairs, and uniquely supporting both single-infographic understanding and multi-infographic cross-image reasoning. Methodologically, it integrates OCR, document layout analysis, numerical reasoning, and cross-image semantic alignment under a rigorously designed dual-task evaluation protocol. Key contributions are: (1) the first publicly available Vietnamese infographic VQA benchmark; (2) a formal definition and implementation of multi-infographic collaborative reasoning; and (3) a comprehensive evaluation of state-of-the-art multimodal models, revealing critical deficiencies in layout-aware comprehension and cross-infographic integration for low-resource languages. Average accuracy drops by 32.7% on multi-infographic tasks versus single-infographic ones, confirming non-span reasoning and cross-image semantic integration as fundamental bottlenecks.

📝 Abstract
Infographic Visual Question Answering (InfographicVQA) evaluates a model's ability to read and reason over data-rich, layout-heavy visuals that combine text, charts, icons, and design elements. Compared with scene-text or natural-image VQA, infographics require stronger integration of OCR, layout understanding, and numerical and semantic reasoning. We introduce ViInfographicVQA, the first benchmark for Vietnamese InfographicVQA, comprising 6,747 real-world infographics and 20,409 human-verified question-answer pairs across economics, healthcare, education, and more. The benchmark includes two evaluation settings. The Single-image task follows the traditional setup in which each question is answered using a single infographic. The Multi-image task requires synthesizing evidence across multiple semantically related infographics and is, to our knowledge, the first Vietnamese evaluation of cross-image reasoning in VQA. We evaluate a range of recent vision-language models on this benchmark, revealing substantial performance disparities, with the most significant errors occurring on Multi-image questions that involve cross-image integration and non-span reasoning. ViInfographicVQA contributes benchmark results for Vietnamese InfographicVQA and sheds light on the limitations of current multimodal models in low-resource contexts, encouraging future exploration of layout-aware and cross-image reasoning methods.
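The dual-task evaluation described in the abstract can be sketched as scoring predictions separately over the Single-image and Multi-image subsets. This is a minimal illustration only: the record schema (`task`, `image_ids`, `answer`, `prediction`) and exact-match scoring are assumptions for the sketch, not the benchmark's actual file format or official metric.

```python
# Hedged sketch: per-task exact-match accuracy over a hypothetical record schema.
# Records tagged "single" use one infographic; "multi" records span several.

def accuracy(records, task):
    """Exact-match accuracy (case- and whitespace-insensitive) for one task type."""
    subset = [r for r in records if r["task"] == task]
    if not subset:
        return 0.0
    correct = sum(
        1 for r in subset
        if r["prediction"].strip().lower() == r["answer"].strip().lower()
    )
    return correct / len(subset)

# Toy examples (invented values, for illustration only).
records = [
    {"task": "single", "image_ids": ["ig_01"], "answer": "12%", "prediction": "12%"},
    {"task": "single", "image_ids": ["ig_02"], "answer": "2020", "prediction": "2019"},
    {"task": "multi", "image_ids": ["ig_03", "ig_04"], "answer": "Hanoi", "prediction": "hanoi"},
    {"task": "multi", "image_ids": ["ig_05", "ig_06"], "answer": "3", "prediction": "5"},
]

single_acc = accuracy(records, "single")
multi_acc = accuracy(records, "multi")
print(single_acc, multi_acc)  # 0.5 0.5
```

Reporting the two subset scores side by side is what surfaces the gap the paper highlights between single- and multi-infographic reasoning.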
Problem

Research questions and friction points this paper is trying to address.

Develops a Vietnamese benchmark for infographic visual question answering
Evaluates models on single and multi-image reasoning tasks
Assesses multimodal integration for low-resource language contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces first Vietnamese infographic VQA benchmark
Includes multi-image cross-document reasoning tasks
Evaluates layout-aware multimodal models on low-resource language
Tue-Thu Van-Dinh
AI VIETNAM Lab, Vietnam
Hoang-Duy Tran
AI VIETNAM Lab, Vietnam
Truong-Binh Duong
Student, University of Science - VNUHCM
Mai-Hanh Pham
AI VIETNAM Lab, Vietnam
Binh-Nam Le-Nguyen
AI VIETNAM Lab, Vietnam
Quoc-Thai Nguyen
AI VIETNAM Lab, Vietnam, National Economics University, Vietnam