🤖 AI Summary
Existing agricultural visual question answering (VQA) methods rely on single-image inputs and static pipelines, limiting their ability to jointly reason across multi-scale, multi-temporal imagery and to infer robustly when external knowledge is scarce.
Method: This paper proposes a self-reflective multi-agent framework tailored for real-world agricultural scenarios. It establishes a dynamic four-role architecture—retrieval, reflection, answer generation, and refinement—enabling cross-image spatial-temporal alignment, real-time retrieval of external agricultural knowledge, and context-aware fusion. Parallel reasoning and iterative answer refinement overcome traditional bottlenecks of evidence limitation and pipeline rigidity.
Contribution/Results: Evaluated on the AgMMU benchmark, our framework achieves significant improvements in accuracy and robustness. It delivers a scalable, verifiable, and systematic solution for complex agricultural VQA, advancing beyond monolithic and inflexible architectures.
📝 Abstract
Agricultural visual question answering is essential for providing farmers and researchers with accurate and timely knowledge. However, many existing approaches are developed predominantly for evidence-constrained settings such as text-only queries or single-image cases. This design prevents them from coping with real-world agricultural scenarios, which often require multi-image inputs offering complementary views across spatial scales and growth stages. Moreover, limited access to up-to-date external agricultural context leaves these systems unable to adapt when evidence is incomplete. In addition, rigid pipelines often lack systematic quality control. To address this gap, we propose a self-reflective and self-improving multi-agent framework that integrates four roles: the Retriever, the Reflector, the Answerer, and the Improver. They collaborate to enable context enrichment, reflective reasoning, answer drafting, and iterative improvement.
The Retriever formulates queries and gathers external information, while the Reflector assesses the adequacy of the retrieved evidence and, when it is insufficient, triggers query reformulation and renewed retrieval. Two Answerers draft candidate responses in parallel to reduce bias. The Improver refines them through iterative checks while ensuring that information from the multiple images is effectively aligned and utilized. Experiments on the AgMMU benchmark show that our framework achieves competitive performance on multi-image agricultural QA.
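The retrieve-reflect-answer-improve loop described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: every function body is a stub standing in for an LLM-backed agent, and all names (`retrieve`, `reflect`, `reformulate`, `answer_question`, `MAX_RETRIEVAL_ROUNDS`, the adequacy criterion) are assumptions introduced for exposition.

```python
# Hypothetical sketch of the four-role loop: Retriever, Reflector,
# two parallel Answerers, and an Improver. All stubs are illustrative.

MAX_RETRIEVAL_ROUNDS = 3  # assumed cap on reflective retrieval rounds

def retrieve(query):
    # Stub for the Retriever: fetch external agricultural knowledge.
    return [f"evidence for: {query}"]

def reflect(evidence):
    # Stub for the Reflector: judge whether evidence is adequate.
    return len(evidence) >= 2  # hypothetical adequacy criterion

def reformulate(query, round_idx):
    # Stub: the Reflector triggers reformulation when evidence is thin.
    return f"{query} (reformulated #{round_idx})"

def answer(images, evidence, persona):
    # Stub for one of the two parallel Answerers.
    return f"[{persona}] draft from {len(images)} images, {len(evidence)} facts"

def improve(candidates, evidence):
    # Stub for the Improver: fuse and refine the parallel drafts.
    return max(candidates, key=len)

def answer_question(question, images):
    query, evidence = question, []
    # Reflective retrieval: re-query until the Reflector is satisfied.
    for i in range(1, MAX_RETRIEVAL_ROUNDS + 1):
        evidence += retrieve(query)
        if reflect(evidence):
            break
        query = reformulate(query, i)
    # Two Answerers draft candidates in parallel (sequential here for brevity).
    candidates = [answer(images, evidence, p)
                  for p in ("answerer-A", "answerer-B")]
    return improve(candidates, evidence)
```

In this sketch the Reflector's adequacy check gates a bounded retrieval loop, and the Improver simply selects among drafts; the paper's Improver additionally performs iterative checks and cross-image alignment.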