RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering

πŸ“… 2025-01-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the incompatibility between discriminative ranking models and generative large language models (LLMs) in multimodal retrieval-augmented question answering (MRAQA), this paper proposes a unified generative re-ranking framework. Methodologically, it introduces a generative multi-task re-ranking mechanism that jointly performs autoregressive document-ID re-ranking and answer generation. It combines LLaVA's pointwise ranking capability with an instruction-tuned LLaMA's sequential modeling strength, unifying multimodal encoding, instruction tuning, permutation augmentation, and joint pointwise plus autoregressive learning. The approach achieves significant improvements over strong baselines on WebQA and MultiModalQA, delivering end-to-end gains in both question-answering accuracy and document-ranking quality. The code and datasets are publicly released.

πŸ“ Abstract
Multi-modal retrieval-augmented Question Answering (MRAQA), integrating text and images, has gained significant attention in information retrieval (IR) and natural language processing (NLP). Traditional ranking methods rely on small encoder-based language models, which are incompatible with modern decoder-based generative large language models (LLMs) that have advanced various NLP tasks. To bridge this gap, we propose RAMQA, a unified framework combining learning-to-rank methods with generative permutation-enhanced ranking techniques. We first train a pointwise multi-modal ranker using LLaVA as the backbone. Then, we apply instruction tuning to train a LLaMA model for re-ranking the top-k documents using an innovative autoregressive multi-task learning approach. Our generative ranking model generates re-ranked document IDs and specific answers from document candidates in various permutations. Experiments on two MRAQA benchmarks, WebQA and MultiModalQA, show significant improvements over strong baselines, highlighting the effectiveness of our approach. Code and data are available at: https://github.com/TonyBY/RAMQA
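The permutation-enhanced generative re-ranking described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual code: the function name, prompt format, and target format are assumptions, and the real system conditions a fine-tuned LLaMA on multimodal candidate representations rather than plain text.

```python
import random

def build_permuted_examples(question, candidates, relevant_ids, answer,
                            n_perms=3, seed=0):
    """Create training examples for generative re-ranking via permutation
    augmentation: candidate order is shuffled per example so the model
    cannot exploit positional bias. Each target lists the relevant
    document IDs first, then the answer (joint ranking + QA objective)."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n_perms):
        perm = candidates[:]
        rng.shuffle(perm)
        prompt = f"Question: {question}\n" + "\n".join(
            f"[{doc['id']}] {doc['text']}" for doc in perm
        )
        # Relevant IDs come first (in their permuted order), then the rest.
        ranked = ([d["id"] for d in perm if d["id"] in relevant_ids]
                  + [d["id"] for d in perm if d["id"] not in relevant_ids])
        target = "Ranking: " + " ".join(ranked) + f"\nAnswer: {answer}"
        examples.append({"prompt": prompt, "target": target})
    return examples
```

A usage example: for a question with candidates `d1` (relevant) and `d2`, each generated example pairs a shuffled candidate list with a target such as `"Ranking: d1 d2\nAnswer: ..."`, so the autoregressive model learns to emit re-ranked IDs and the answer in one sequence.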
Problem

Research questions and friction points this paper is trying to address.

Large Model Integration
Multimodal QA
Performance Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAMQA
Multimodal Ranking
Large Language Model Optimization
Yang Bai
University of Florida / Gainesville, Florida, USA
Christan Earl Grant
University of Florida / Gainesville, Florida, USA
Daisy Zhe Wang
University of Florida
Databases, In-Database Machine Learning, Probabilistic Database Systems, Probabilistic Knowledge Bases, Probabilistic Logic