🤖 AI Summary
Existing Bangla (Bengali) Visual Question Answering (VQA) datasets suffer from limited human annotation, restrictive answer formats, poor translation quality, and the absence of high-quality open benchmarks, all of which severely hinder low-resource multimodal research. To address these limitations, we introduce Bangla-Bayanno, a high-quality, open-source Bangla VQA benchmark comprising 4,750+ images and 52,650 question-answer pairs spanning nominal (short descriptive), quantitative (numeric), and polar (yes/no) answer types. Our methodology employs a multilingual large language model (LLM)-assisted translation refinement pipeline, validated by human experts to ensure semantic fidelity and linguistic naturalness. Each question is tagged with one of the three answer types, supporting evaluation protocols stratified by answer type (a schema sketch follows below). Bangla-Bayanno fills a critical gap in low-resource language VQA benchmarks and establishes a robust foundation for inclusive, multilingual multimodal AI research.
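To make the three-way answer-type taxonomy concrete, here is a minimal record schema sketch in Python. The field names (`image_id`, `question`, `answer`, `answer_type`) and the example record are illustrative assumptions, not the dataset's published schema.

```python
from dataclasses import dataclass
from enum import Enum

class AnswerType(Enum):
    NOMINAL = "nominal"            # short descriptive answer
    QUANTITATIVE = "quantitative"  # numeric answer
    POLAR = "polar"                # yes/no answer

@dataclass
class VQARecord:
    image_id: str            # identifier of the image the question refers to
    question: str            # Bangla question text
    answer: str              # Bangla answer text
    answer_type: AnswerType  # one of the three taxonomy classes

# Hypothetical example record; the text is illustrative, not drawn from the dataset.
sample = VQARecord(
    image_id="img_00042",
    question="ছবিতে কয়টি গরু আছে?",  # "How many cows are in the picture?"
    answer="তিনটি",                    # "Three"
    answer_type=AnswerType.QUANTITATIVE,
)
```

Tagging each pair this way lets evaluation be reported per answer type, e.g. exact-match accuracy on polar questions separately from numeric or descriptive ones.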
📝 Abstract
In this paper, we introduce Bangla-Bayanno, an open-ended Visual Question Answering (VQA) dataset in Bangla, a widely spoken language that remains low-resource in multimodal AI research. Most existing datasets are either manually annotated with an emphasis on a specific domain, query type, or answer type, or are constrained by niche answer formats. To mitigate human-induced errors and ensure clarity, we implemented a multilingual LLM-assisted translation refinement pipeline (sketched below), which overcomes the low-quality translations typical of multilingual sources. The dataset comprises 52,650 question-answer pairs across 4,750+ images. Questions are classified into three distinct answer types: nominal (short descriptive), quantitative (numeric), and polar (yes/no). Bangla-Bayanno provides the most comprehensive open-source, high-quality VQA benchmark in Bangla, aiming to advance research in low-resource multimodal learning and facilitate the development of more inclusive AI systems.
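As a rough illustration of what a translate-then-refine pipeline of this kind could look like, here is a minimal sketch. The helpers `call_llm`, `translate_to_bangla`, and `refine_translation` are hypothetical stand-ins, and the two-stage structure is an assumption for illustration, not the authors' published implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a multilingual LLM call (e.g. an API request).
    Swap in a real client here; echoing the prompt keeps the sketch runnable."""
    return prompt  # stand-in only

def translate_to_bangla(text: str) -> str:
    # Stage 1 (hypothetical): first-pass translation into Bangla.
    return call_llm(f"Translate to Bangla, preserving meaning exactly:\n{text}")

def refine_translation(source_en: str, draft_bn: str) -> str:
    # Stage 2 (hypothetical): the LLM compares the Bangla draft against the
    # English source and returns a corrected, more natural version.
    return call_llm(
        "Given the English source and a draft Bangla translation, "
        "fix any mistranslation and make the Bangla fluent and natural.\n"
        f"Source: {source_en}\nDraft: {draft_bn}"
    )

def build_bangla_qa(question_en: str, answer_en: str) -> tuple[str, str]:
    # Refine the question and the answer independently so errors in one
    # do not propagate into the other.
    q = refine_translation(question_en, translate_to_bangla(question_en))
    a = refine_translation(answer_en, translate_to_bangla(answer_en))
    return q, a
```

The separate refinement pass is the key idea: rather than trusting a single translation step, each draft is rechecked against its English source before the pair enters the dataset.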