🤖 AI Summary
This work addresses Vietnamese infographic visual question answering (ViInfographicVQA) by introducing the first large-scale, real-world benchmark for the task, comprising 6,747 infographics and 20,409 human-verified QA pairs. The benchmark is unique in supporting both single-infographic understanding and multi-infographic cross-image reasoning. Methodologically, it integrates OCR, document layout analysis, numerical reasoning, and cross-image semantic alignment under a rigorously designed dual-task evaluation protocol. The key contributions are: (1) the first publicly available Vietnamese infographic VQA benchmark; (2) a formal definition and implementation of multi-infographic collaborative reasoning; and (3) a comprehensive evaluation of state-of-the-art multimodal models, revealing critical deficiencies in layout-aware comprehension and cross-infographic integration for low-resource languages. Average accuracy drops by 32.7% on multi-infographic tasks versus single-infographic ones, confirming non-span reasoning and cross-image semantic integration as fundamental bottlenecks.
📝 Abstract
Infographic Visual Question Answering (InfographicVQA) evaluates a model's ability to read and reason over data-rich, layout-heavy visuals that combine text, charts, icons, and design elements. Compared with scene-text or natural-image VQA, infographics demand stronger integration of OCR, layout understanding, and numerical and semantic reasoning. We introduce ViInfographicVQA, the first benchmark for Vietnamese InfographicVQA, comprising 6,747 real-world infographics and 20,409 human-verified question-answer pairs spanning economics, healthcare, education, and other domains. The benchmark includes two evaluation settings. The Single-image task follows the traditional setup in which each question is answered using a single infographic. The Multi-image task requires synthesizing evidence across multiple semantically related infographics and is, to our knowledge, the first Vietnamese evaluation of cross-image reasoning in VQA. We evaluate a range of recent vision-language models on this benchmark, revealing substantial performance disparities, with the most significant errors occurring on Multi-image questions that involve cross-image integration and non-span reasoning. ViInfographicVQA contributes benchmark results for Vietnamese InfographicVQA and sheds light on the limitations of current multimodal models in low-resource contexts, encouraging future exploration of layout-aware and cross-image reasoning methods.