Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering

πŸ“… 2024-11-25
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Because multimodal large language models (MLLMs) are limited to the knowledge acquired during training, this paper proposes a knowledge-augmented framework for knowledge-intensive visual question answering (K-VQA). The authors introduce learnable self-reflective tokens that allow the model to dynamically determine whether external knowledge retrieval is required and to predict the relevance of each retrieved candidate. A two-stage, two-model training recipe integrates a knowledge-retrieval interface into the LLaVA architecture. The resulting model outperforms existing methods on multiple K-VQA benchmarks while preserving performance on tasks that do not require external knowledge. Source code and trained models are publicly released.

πŸ“ Abstract
Multimodal LLMs (MLLMs) are the natural extension of large language models to handle multimodal inputs, combining text and image data. They have recently garnered attention due to their capability to address complex tasks involving both modalities. However, their effectiveness is limited to the knowledge acquired during training, which restricts their practical utility. In this work, we introduce a novel method to enhance the adaptability of MLLMs by integrating external knowledge sources. Our proposed model, Reflective LLaVA (ReflectiVA), utilizes reflective tokens to dynamically determine the need for external knowledge and predict the relevance of information retrieved from an external database. Tokens are trained following a two-stage two-model training recipe. This ultimately enables the MLLM to manage external knowledge while preserving fluency and performance on tasks where external knowledge is not needed. Through our experiments, we demonstrate the efficacy of ReflectiVA for knowledge-based visual question answering, highlighting its superior performance compared to existing methods. Source code and trained models are publicly available at https://aimagelab.github.io/ReflectiVA.
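The inference flow described in the abstract (emit a reflective token to gate retrieval, then tag each retrieved passage as relevant or not before answering) can be sketched as below. This is a minimal illustration, not the paper's actual implementation: the token strings, the `predict_gate`/`predict_relevance`/`generate` methods, and the toy model and retriever are all hypothetical stand-ins.

```python
# Sketch of reflective-token-gated retrieval at inference time.
# Token names and all model/retriever interfaces are hypothetical.

RETRIEVE, NO_RETRIEVE = "<RET>", "<NORET>"
RELEVANT, NOT_RELEVANT = "<REL>", "<NOREL>"

def answer(question, image, model, retriever, top_k=3):
    """Answer a visual question, consulting external knowledge only
    when the model's reflective token says retrieval is needed."""
    # Step 1: the MLLM emits a reflective token indicating whether its
    # parametric knowledge suffices for this (image, question) pair.
    gate = model.predict_gate(question, image)  # -> <RET> or <NORET>
    if gate == NO_RETRIEVE:
        return model.generate(question, image, context=None)

    # Step 2: retrieve candidate passages and keep only those the model
    # tags as relevant via a second reflective-token prediction.
    candidates = retriever.search(question, image, top_k=top_k)
    relevant = [p for p in candidates
                if model.predict_relevance(question, image, p) == RELEVANT]

    # Step 3: condition the final answer on the filtered knowledge.
    return model.generate(question, image, context=relevant)

# Toy stand-ins so the sketch runs end to end (purely illustrative).
class ToyModel:
    def predict_gate(self, q, img):
        return RETRIEVE if "who" in q.lower() else NO_RETRIEVE
    def predict_relevance(self, q, img, passage):
        return RELEVANT if any(w in passage for w in q.split()) else NOT_RELEVANT
    def generate(self, q, img, context):
        return f"answer using {len(context or [])} passage(s)"

class ToyRetriever:
    def search(self, q, img, top_k):
        return ["Eiffel Tower built 1889", "unrelated text"][:top_k]
```

The key design point is that the same MLLM produces both decisions as ordinary output tokens, so no separate retriever-routing model is needed at inference.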
Problem

Research questions and friction points this paper is trying to address.

Enhancing MLLMs with external knowledge integration
Dynamic determination of external knowledge need
Improving knowledge-based visual question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates external knowledge sources dynamically
Uses reflective tokens for relevance prediction
Two-stage two-model training enhances adaptability