VisText-Mosquito: A Multimodal Dataset and Benchmark for AI-Based Mosquito Breeding Site Detection and Reasoning

📅 2025-06-17
📈 Citations: 0
🤖 AI Summary
To address the need for proactive surveillance and control of mosquito-borne diseases, this study proposes the first vision-language multimodal method for mosquito breeding site identification. Motivated by the limitations of conventional manual inspection—namely, low scalability and poor interpretability—we construct the first real-world multimodal dataset comprising synchronized images and descriptive textual annotations of mosquito breeding habitats, accompanied by a cross-modal alignment annotation framework. Our approach jointly models image-level detection (YOLOv9s), pixel-level water segmentation (YOLOv11n-Seg), and natural language inference (fine-tuned BLIP), enabling end-to-end “perceive–segment–understand” reasoning. Experiments demonstrate strong performance: detection AP = 0.929, segmentation mAP@50 = 0.798, and text generation scores of BLEU-4 = 54.7, ROUGE-L = 0.87, and BERTScore = 0.91. All code and data are publicly released to advance AI-driven, prevention-oriented public health paradigms.

📝 Abstract
Mosquito-borne diseases pose a major global health risk, requiring early detection and proactive control of breeding sites to prevent outbreaks. In this paper, we present VisText-Mosquito, a multimodal dataset that integrates visual and textual data to support automated detection, segmentation, and reasoning for mosquito breeding site analysis. The dataset includes 1,828 annotated images for object detection, 142 images for water surface segmentation, and natural language reasoning texts linked to each image. The YOLOv9s model achieves the highest precision of 0.92926 and mAP@50 of 0.92891 for object detection, while YOLOv11n-Seg reaches a segmentation precision of 0.91587 and mAP@50 of 0.79795. For reasoning generation, our fine-tuned BLIP model achieves a final loss of 0.0028, with a BLEU score of 54.7, BERTScore of 0.91, and ROUGE-L of 0.87. This dataset and model framework emphasize the theme "Prevention is Better than Cure," showcasing how AI-based detection can proactively address mosquito-borne disease risks. The dataset and implementation code are publicly available on GitHub: https://github.com/adnanul-islam-jisun/VisText-Mosquito
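The mAP@50 figures reported above count a predicted box as a true positive when its Intersection-over-Union (IoU) with a ground-truth box is at least 0.5. A minimal stdlib sketch of that matching rule (the boxes here are illustrative, not from the dataset):

```python
def iou(box_a, box_b):
    """Intersection-over-Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# At the mAP@50 threshold, IoU >= 0.5 is a match:
print(iou((0, 0, 10, 10), (2, 0, 12, 10)))  # ~0.667 -> counts as a true positive
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333 -> counts as a false positive
```

The same IoU computation underlies both the detection mAP@50 of 0.92891 and the segmentation mAP@50 of 0.79795, applied to boxes and masks respectively.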
Problem

Research questions and friction points this paper is trying to address.

Detect mosquito breeding sites using multimodal AI
Segment water surfaces for mosquito habitat analysis
Generate reasoning texts for breeding site detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset with visual and textual data
YOLOv9s model for high-precision object detection
Fine-tuned BLIP model for reasoning generation
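The three contributions above compose into the paper's "perceive–segment–understand" flow. A minimal orchestration sketch, with hypothetical stub stages standing in for the trained YOLOv9s, YOLOv11n-Seg, and fine-tuned BLIP models (the real models take image tensors and model weights not shown here):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

@dataclass
class BreedingSiteReport:
    boxes: List[Box]        # detected breeding-site boxes (detection stage)
    water_regions: List[Box]  # water-surface regions (segmentation stage)
    reasoning: str          # natural-language explanation (reasoning stage)

def analyze(image,
            detect: Callable,     # image -> list of boxes
            segment: Callable,    # image -> list of water regions
            describe: Callable) -> BreedingSiteReport:
    """Run the perceive -> segment -> understand stages end to end."""
    boxes = detect(image)
    regions = segment(image)
    reasoning = describe(image, boxes, regions)
    return BreedingSiteReport(boxes, regions, reasoning)

# Stub stages for illustration only:
report = analyze(
    image=None,
    detect=lambda img: [(10, 10, 50, 50)],
    segment=lambda img: [(12, 12, 48, 48)],
    describe=lambda img, b, r: f"{len(b)} container(s) with standing water detected.",
)
print(report.reasoning)  # -> 1 container(s) with standing water detected.
```

The stage boundaries mirror the paper's framework: detection localizes candidate containers, segmentation confirms standing water, and the language model turns both into an interpretable explanation.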
Authors
Md. Adnanul Islam
United International University, Bangladesh
Md. Faiyaz Abdullah Sayeedi
Undergraduate Teaching Assistant, United International University
Md Asaduzzaman Shuvo
United International University, Bangladesh
Muhammad Ziaur Rahman
United International University, Bangladesh
S. R. Bappy
United International University, Bangladesh
Raiyan Rahman
University of Portsmouth, United Kingdom
Swakkhar Shatabda
Professor, School of Data and Sciences, BRAC University