🤖 AI Summary
To address the inefficiency of few-shot example retrieval and the limited generalization of multimodal large language models (MLLMs) in scientific visual question answering (SciVQA), this paper proposes an adaptive ensemble framework. First, it dynamically retrieves the most semantically relevant few-shot examples based on question semantics and image modality, while selecting the best-suited MLLM and prompt template for each figure and question type. Second, it introduces a confidence-weighted fusion strategy to collaboratively refine answers across multiple models. Performance is evaluated with ROUGE-1, ROUGE-L, and BERTScore. On the SciVQA 2025 blind test set, the system achieves a mean F1 score of 85.12, ranking third and significantly outperforming baseline models. The implementation is publicly available, establishing a reproducible and extensible ensemble paradigm for few-shot scientific VQA.
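The retrieval step can be illustrated with a minimal sketch: rank a pool of candidate few-shot examples by semantic similarity to the incoming question and keep the top-k. The toy bag-of-words embedding and the function names below are illustrative assumptions; the paper's actual system would use a learned multimodal encoder over both question text and figure content.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding (assumption); a real system would
    # use a sentence/image encoder over the question and the figure.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(question, pool, k=2):
    # Rank candidate few-shot examples by similarity to the incoming
    # question and return the k most relevant ones.
    q = embed(question)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex["question"])),
                    reverse=True)
    return ranked[:k]

pool = [
    {"question": "What is the peak value in the bar chart?", "answer": "42"},
    {"question": "Which line decreases after 2010?", "answer": "The red line"},
    {"question": "What is the maximum value shown in the chart?", "answer": "17"},
]
top = retrieve_examples("What is the highest value in the chart?", pool, k=2)
```

Here the two chart-value questions outrank the unrelated line-trend question; in the full system the retrieved examples would then be inlined into the prompt for the selected MLLM.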
📝 Abstract
This paper describes our system for the SciVQA 2025 Shared Task on Scientific Visual Question Answering. Our system employs an ensemble of two Multimodal Large Language Models and various few-shot example retrieval strategies. The model and few-shot setting are selected based on the figure and question type. We also select answers based on the models' confidence levels. On the blind test data, our system ranks third out of seven with an average F1 score of 85.12 across ROUGE-1, ROUGE-L, and BERTScore. Our code is publicly available.
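The confidence-based answer selection can be sketched as a weighted vote: each model contributes its answer with an associated confidence, and the answer with the largest total confidence mass wins. This is a minimal sketch under that assumption; the paper's exact fusion rule is not detailed in the abstract, and the normalization step below is illustrative.

```python
from collections import defaultdict

def fuse_answers(predictions):
    # predictions: list of (answer, confidence) pairs, one per model run.
    # Sum confidence mass per distinct (lightly normalized) answer and
    # return the answer with the highest total -- confidence-weighted voting.
    scores = defaultdict(float)
    for answer, conf in predictions:
        scores[answer.strip().lower()] += conf
    return max(scores, key=scores.get)

# Two models agree on "blue" with moderate confidence; one prefers "green".
preds = [("Blue", 0.62), ("green", 0.55), ("blue", 0.51)]
fused = fuse_answers(preds)
```

With only two models, this reduces to picking the more confident model's answer when they disagree and keeping the shared answer when they agree.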