MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval

📅 2025-06-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of implicit reasoning, weak training strategies, and insufficient cross-modal ranking accuracy in multimodal document retrieval, this paper proposes a two-stage reasoning-enhanced training paradigm. In the first stage, instruction-guided supervised fine-tuning constructs high-quality, structured reasoning chains. In the second stage, a Proximal Policy Optimization (PPO) framework is employed with a composite reward comprising multimodal re-ranking reward and templated reasoning quality reward. The method integrates a multimodal encoder with fine-grained reward modeling to significantly improve both reasoning interpretability and ranking performance. Evaluated on the MMDocIR benchmark, it achieves state-of-the-art results, outperforming the best baseline by over 4% in Recall@1. Notably, even small-scale models trained with this paradigm match or exceed the performance of larger counterparts, demonstrating the effectiveness and generalizability of the reasoning-driven approach.
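The composite reward described above (a multimodal re-ranking reward combined with a templated reasoning-quality reward) can be illustrated with a minimal sketch. The paper does not publish its exact reward functions; the template pattern, top-k criterion, and weight `alpha` below are all assumptions for illustration only.

```python
import re

def format_reward(response: str) -> float:
    """Hypothetical template check: 1.0 if the response wraps its reasoning
    in <think>...</think> followed by a ranked answer in <answer>...</answer>.
    The tag names are assumed, not taken from the paper."""
    pattern = r"<think>.+</think>\s*<answer>.+</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), flags=re.DOTALL) else 0.0

def rerank_reward(predicted_order: list[int], gold_index: int, k: int = 1) -> float:
    """Hypothetical ranking reward: 1.0 if the gold candidate is in the top-k
    of the model's predicted ordering, else 0.0."""
    return 1.0 if gold_index in predicted_order[:k] else 0.0

def composite_reward(response: str, predicted_order: list[int],
                     gold_index: int, alpha: float = 0.8) -> float:
    """Weighted sum of ranking and format rewards; alpha is an assumed weight,
    not a value reported by the authors."""
    return (alpha * rerank_reward(predicted_order, gold_index)
            + (1 - alpha) * format_reward(response))
```

In a PPO loop, a scalar like this would score each sampled response; the two terms jointly push the policy toward both correct rankings and well-structured reasoning chains.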

📝 Abstract
Multimodal document retrieval systems enable information access across text, images, and layouts, benefiting various domains like document-based question answering, report analysis, and interactive content summarization. Rerankers improve retrieval precision by reordering retrieved candidates. However, current multimodal reranking methods remain underexplored, with significant room for improvement in both training strategies and overall effectiveness. Moreover, the lack of explicit reasoning makes it difficult to analyze and optimize these methods further. In this paper, we propose MM-R5, a MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval, aiming to provide a more effective and reliable solution for multimodal reranking tasks. MM-R5 is trained in two stages: supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we focus on improving instruction-following and guiding the model to generate complete and high-quality reasoning chains. To support this, we introduce a novel data construction strategy that produces rich, high-quality reasoning data. In the RL stage, we design a task-specific reward framework, including a reranking reward tailored for multimodal candidates and a composite template-based reward to further refine reasoning quality. We conduct extensive experiments on MMDocIR, a challenging public benchmark spanning multiple domains. MM-R5 achieves state-of-the-art performance on most metrics and delivers comparable results to much larger models on the remaining ones. Moreover, compared to the best retrieval-only method, MM-R5 improves Recall@1 by over 4%. These results validate the effectiveness of our reasoning-enhanced training pipeline.
Problem

Research questions and friction points this paper is trying to address.

Improving multimodal reranking effectiveness and reliability
Addressing lack of explicit reasoning in reranking methods
Enhancing training strategies for multimodal document retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training: SFT and RL
Novel data construction for reasoning
Task-specific reward framework design
Mingjun Xu
DP Technology, Beijing, China
Jinhan Dong
DP Technology, Beijing, China
Jue Hou
University of Minnesota Twin Cities
Statistics
Zehui Wang
DP Technology, Beijing, China
Sihang Li
DP Technology, Beijing, China
Zhifeng Gao
DP Technology
Data Mining · Machine Learning · AI for Science · AI for Industry
Renxin Zhong
School of Intelligent Systems Engineering, Sun Yat-Sen University, Shenzhen, China
Hengxing Cai
Sun Yat-sen University
LLM · VLM · VLN · UAV