🤖 AI Summary
This work addresses the lack of evaluation benchmarks for image-text reading comprehension in Vietnamese Visual Question Answering (VQA). We introduce ViTextVQA, the first large-scale Vietnamese VQA dataset explicitly designed for image-text understanding, comprising over 16,000 images and 50,000 OCR-annotated question-answer pairs. We formally define and systematically evaluate image-text comprehension capability in Vietnamese scenes, revealing that OCR token ordering critically affects answer generation. To address this, we propose a dedicated VQA framework integrating OCR sequence modeling (BERT/LSTM), multimodal feature extraction (ViT/CLIP), and cross-modal attention mechanisms. Experiments demonstrate substantial accuracy improvements over mainstream models on Vietnamese image-text understanding tasks. The ViTextVQA dataset is publicly released, establishing a foundational resource for multimodal understanding research in low-resource languages.
📝 Abstract
Visual Question Answering (VQA) is a complex task that requires processing natural language and images simultaneously. Early research on this task focused on methods that help machines understand objects and scene context in images. However, scene text appearing in images, which often carries explicit information about the image's content, was largely overlooked. Alongside the rapid development of AI, many studies worldwide have examined the reading comprehension ability of VQA models. In Vietnam, a developing country where research conditions are still limited, this task remains open. Therefore, we introduce ViTextVQA (**Vi**etnamese **Text**-based **V**isual **Q**uestion **A**nswering dataset), the first large-scale Vietnamese dataset specializing in understanding text appearing in images, containing **over 16,000** images and **over 50,000** questions with answers. Through meticulous experiments with various state-of-the-art models, we uncover the significance of the order in which OCR tokens are processed and selected to formulate answers. This finding helped us significantly improve the performance of the baseline models on the ViTextVQA dataset. Our dataset is available at [this link](https://github.com/minhquan6203/ViTextVQA-Dataset) for research purposes.
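The abstract notes that the order in which OCR tokens are fed to the answer generator matters. As a minimal illustrative sketch (not the paper's actual pipeline), one common way to impose a reading order is to group OCR tokens into rows by their y-coordinate and sort each row left-to-right; the `(text, x, y)` token format and the `row_tolerance` parameter here are assumptions for illustration only:

```python
# Hypothetical sketch: ordering OCR tokens in reading order
# (top-to-bottom, then left-to-right) before answer generation.
# Token format (text, x, y) is an assumed schema, not the dataset's.

def sort_ocr_tokens(tokens, row_tolerance=10):
    """Group tokens into rows by y-coordinate, then sort each row by x."""
    rows = []
    for tok in sorted(tokens, key=lambda t: t[2]):  # scan top to bottom
        for row in rows:
            # Tokens whose y-coordinates are close belong to the same row.
            if abs(row[0][2] - tok[2]) <= row_tolerance:
                row.append(tok)
                break
        else:
            rows.append([tok])
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda t: t[1]))  # left to right
    return [t[0] for t in ordered]

# Example: three Vietnamese scene-text tokens with made-up coordinates.
tokens = [("quán", 120, 12), ("phở", 10, 10), ("ngon", 15, 50)]
print(sort_ocr_tokens(tokens))  # ['phở', 'quán', 'ngon']
```

Downstream, the resulting token sequence would be concatenated with the question before being passed to the answer-generation model.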