🤖 AI Summary
To address the challenges of implicit reasoning, weak training strategies, and limited effectiveness in multimodal document reranking, this paper proposes a two-stage reasoning-enhanced training paradigm. In the first stage, instruction-guided supervised fine-tuning teaches the model to produce high-quality, structured reasoning chains, supported by a purpose-built data construction strategy. In the second stage, a Proximal Policy Optimization (PPO) framework is employed with a composite reward that combines a multimodal reranking reward with a template-based reasoning quality reward. Together, these stages improve both reasoning interpretability and ranking performance. Evaluated on the MMDocIR benchmark, the method achieves state-of-the-art results on most metrics, improving Recall@1 by over 4% relative to the best retrieval-only method. Notably, even a small-scale model trained with this paradigm delivers results comparable to those of much larger counterparts, demonstrating the effectiveness and generalizability of the reasoning-driven approach.
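A minimal sketch of how such a composite reward might be computed, assuming a reciprocal-rank reranking reward and a tag-based template check; the function name, the `alpha` weighting, and the `<think>`/`<answer>` tags are illustrative assumptions, not the paper's exact formulation:

```python
import re

def composite_reward(response: str, predicted_ranking: list[int],
                     gold_ids: set[int], alpha: float = 0.5) -> float:
    """Combine a reranking reward with a template-based reasoning reward.

    `alpha` and both sub-rewards are illustrative choices; the paper's
    exact reward design may differ.
    """
    # Reranking reward: reciprocal rank of the first relevant candidate,
    # so placing a gold page at rank 1 yields the maximum reward.
    rerank_reward = 0.0
    for rank, cand_id in enumerate(predicted_ranking, start=1):
        if cand_id in gold_ids:
            rerank_reward = 1.0 / rank
            break

    # Template reward: check that the response contains an explicit
    # reasoning block followed by a final answer block.
    has_reasoning = bool(re.search(r"<think>.+?</think>", response, re.DOTALL))
    has_answer = bool(re.search(r"<answer>.+?</answer>", response, re.DOTALL))
    template_reward = 1.0 if (has_reasoning and has_answer) else 0.0

    return alpha * rerank_reward + (1 - alpha) * template_reward
```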
📝 Abstract
Multimodal document retrieval systems enable information access across text, images, and layouts, benefiting applications such as document-based question answering, report analysis, and interactive content summarization. Rerankers improve retrieval precision by reordering retrieved candidates. However, current multimodal reranking methods remain underexplored, with significant room for improvement in both training strategies and overall effectiveness. Moreover, the lack of explicit reasoning makes these methods difficult to analyze and optimize further. In this paper, we propose MM-R5, a MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval, aiming to provide a more effective and reliable solution for multimodal reranking tasks. MM-R5 is trained in two stages: supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we focus on improving instruction-following and guiding the model to generate complete, high-quality reasoning chains. To support this, we introduce a novel data construction strategy that produces rich, high-quality reasoning data. In the RL stage, we design a task-specific reward framework, including a reranking reward tailored to multimodal candidates and a composite template-based reward that further refines reasoning quality. We conduct extensive experiments on MMDocIR, a challenging public benchmark spanning multiple domains. MM-R5 achieves state-of-the-art performance on most metrics and delivers results comparable to those of much larger models on the remaining ones. Moreover, compared to the best retrieval-only method, MM-R5 improves Recall@1 by over 4%. These results validate the effectiveness of our reasoning-enhanced training pipeline.
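For reference, a minimal sketch of the Recall@1 metric cited above, under the common definition of recall at cutoff k; MMDocIR's official scorer may compute it differently:

```python
def recall_at_k(ranked_ids: list[int], gold_ids: set[int], k: int = 1) -> float:
    """Recall@k: fraction of relevant items that appear in the top-k results.

    With a single relevant page per query this reduces to a top-k hit
    rate, which is how Recall@1 is typically reported.
    """
    if not gold_ids:
        return 0.0
    top_k = set(ranked_ids[:k])
    return len(top_k & gold_ids) / len(gold_ids)
```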