A Lightweight and Explainable Vision-Language Framework for Crop Disease Visual Question Answering

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a lightweight and interpretable vision-language framework tailored to visual question answering for crop disease diagnosis, addressing challenges in visual understanding, language generation accuracy, and model interpretability. The approach pairs a Swin Transformer-based visual encoder with a sequence-to-sequence language decoder, enhanced by task-oriented visual pretraining and a two-stage training strategy that strengthens visual representation learning and cross-modal alignment. Evaluated on a large-scale crop disease dataset, the model outperforms existing large-scale baselines with significantly fewer parameters, both in crop and disease identification and in natural language answer generation, as measured by BLEU, ROUGE, and BERTScore. Interpretability and robustness are further validated through Grad-CAM visualizations and token-level attribution analysis.

📝 Abstract
Visual question answering for crop disease analysis requires accurate visual understanding and reliable language generation. This work presents a lightweight vision-language framework for crop and disease identification from leaf images. The proposed approach combines a Swin Transformer vision encoder with sequence-to-sequence language decoders. A two-stage training strategy is adopted to improve visual representation learning and cross-modal alignment. The model is evaluated on a large-scale crop disease dataset using classification and natural language generation metrics. Experimental results show high accuracy for both crop and disease identification. The framework also achieves strong performance on BLEU, ROUGE and BERTScore. Our proposed models outperform large-scale vision-language baselines while using significantly fewer parameters. Explainability is assessed using Grad-CAM and token-level attribution. Qualitative results demonstrate robust performance under diverse user-driven queries. These findings highlight the effectiveness of task-specific visual pretraining for crop disease visual question answering.
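The abstract cites Grad-CAM as the main explainability tool. As a rough illustration of what Grad-CAM computes over an encoder's feature maps, here is a minimal pure-Python sketch; the toy activations and gradients are invented for the example, and this is not the authors' implementation:

```python
# Minimal Grad-CAM-style heatmap: weight each activation channel by the
# spatial mean of its gradient, sum the weighted channels, apply ReLU.
# Illustrative sketch only; real Grad-CAM runs on a trained network.

def grad_cam(activations, gradients):
    """activations, gradients: [channels][h][w] nested lists.
    Returns an h x w heatmap ReLU(sum_k alpha_k * A_k), where alpha_k
    is the global-average-pooled gradient for channel k."""
    c = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: mean gradient over the spatial dimensions.
    alphas = [sum(sum(row) for row in gradients[k]) / (h * w) for k in range(c)]
    cam = [[0.0] * w for _ in range(h)]
    for k in range(c):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alphas[k] * activations[k][i][j]
    # ReLU keeps only regions with positive influence on the target class.
    return [[max(0.0, v) for v in row] for row in cam]

# Toy example: 2 channels on a 2x2 feature map.
acts = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.5, 0.5], [0.5, 0.5]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],      # alpha_0 = 1.0
         [[-2.0, -2.0], [-2.0, -2.0]]]  # alpha_1 = -2.0
print(grad_cam(acts, grads))  # → [[0.0, 0.0], [0.0, 1.0]]
```

In the paper's setting, the heatmap would be upsampled to the leaf image's resolution to show which lesion regions drove the predicted crop and disease labels.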
Problem

Research questions and friction points this paper is trying to address.

crop disease
visual question answering
vision-language framework
explainability
leaf image
Innovation

Methods, ideas, or system contributions that make the work stand out.

lightweight vision-language model
two-stage training
cross-modal alignment
explainable AI
crop disease VQA
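The two-stage training idea can be pictured as a schedule over which modules receive gradient updates in each stage. The sketch below is a hypothetical illustration; the module names and the choice to keep the encoder trainable in stage 2 are assumptions for the example, not details taken from the paper:

```python
# Hypothetical two-stage schedule: stage 1 pretrains the vision encoder
# on crop/disease classification, stage 2 trains toward answer generation.
# Module names and freezing policy are illustrative assumptions.

def two_stage_schedule(stages):
    """Return (name, objective, trainable modules) per stage."""
    plan = []
    for stage in stages:
        trainable = [m for m, on in stage["modules"].items() if on]
        plan.append((stage["name"], stage["objective"], trainable))
    return plan

schedule = two_stage_schedule([
    {"name": "stage1", "objective": "classification",
     "modules": {"swin_encoder": True, "seq2seq_decoder": False}},
    {"name": "stage2", "objective": "answer_generation",
     "modules": {"swin_encoder": True, "seq2seq_decoder": True}},
])
print(schedule)
```

The point of such a schedule is that stage 1 gives the encoder task-specific visual features before stage 2 asks the decoder to align language output with them, which is the cross-modal alignment benefit the abstract attributes to the strategy.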
Md. Zahid Hossain
Lecturer, Ahsanullah University of Science and Technology
Computer Vision · Deep Learning · Vision Language Models
Most. Sharmin Sultana Samu
Department of Computer Science and Engineering, BRAC University, Dhaka, 1212, Bangladesh.
Md. Rakibul Islam
Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, 1208, Bangladesh.
Md. Siam Ansary
Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, 1208, Bangladesh.