MMAPG: A Training-Free Framework for Multimodal Multi-hop Question Answering via Adaptive Planning Graphs

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal multi-hop question answering faces challenges including error propagation across intermediate reasoning steps and high training costs. This paper proposes a training-free, adaptive planning graph framework that dynamically constructs multi-path reasoning structures, departing from conventional single-chain sequential retrieval and reasoning paradigms. The method employs a tripartite “planning–retrieval–reasoning” architecture, integrating modality-specific retrieval strategies to enable flexible fusion of heterogeneous visual and textual information and adaptive expansion of multi-hop reasoning paths. Evaluated on MultimodalQA and WebQA, our approach matches or surpasses supervised baselines in accuracy while demonstrating superior robustness, generalization capability, and computational efficiency. Its core contribution lies in introducing the first training-free, multi-path, and adaptive multimodal reasoning planning mechanism—eliminating reliance on labeled supervision, supporting diverse reasoning trajectories, and enabling dynamic path selection based on query and context semantics.

📝 Abstract
Multimodal multi-hop question answering requires integrating information from diverse sources, such as images and text, to derive answers. Existing methods typically rely on sequential retrieval and reasoning, where each step builds on the previous output. This single-path paradigm, however, makes them vulnerable to errors introduced by misleading intermediate steps. Moreover, developing multimodal models can be computationally expensive, often requiring extensive training. To address these limitations, we propose a training-free framework guided by an Adaptive Planning Graph, which consists of planning, retrieval, and reasoning modules. The planning module analyzes the current state of the Adaptive Planning Graph and determines both the next action and where to expand the graph, enabling dynamic and flexible exploration of reasoning paths. To handle retrieval when the target modality is not specified in advance, we devise modality-specific strategies that dynamically adapt to distinct data types. Our approach preserves the characteristics of multimodal information without costly task-specific training, enabling seamless integration with up-to-date models. Finally, experiments on MultimodalQA and WebQA show that our approach matches or outperforms existing models that rely on training.
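The planning–retrieval–reasoning loop described in the abstract can be sketched roughly as follows. This is a minimal toy illustration under assumed names (`Node`, `plan_next`, `solve`, and the `retriever`/`reasoner` callables are all hypothetical), not the paper's actual implementation; in the real system the planning step would be an LLM call over the full graph state.

```python
# Hypothetical sketch of an adaptive planning-graph loop.
# All names are illustrative, not from the paper's codebase.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    """One step in the planning graph: a sub-question plus its evidence."""
    question: str
    evidence: str | None = None   # filled in by the retrieval module
    answer: str | None = None     # filled in by the reasoning module
    children: list[Node] = field(default_factory=list)

def plan_next(node: Node) -> str:
    """Planning module (stand-in for an LLM): inspect a node's state
    and decide the next action for it."""
    if node.evidence is None:
        return "retrieve"
    if node.answer is None:
        return "reason"
    return "done"

def solve(root: Node, retriever, reasoner, max_steps: int = 10) -> str | None:
    """Expand the graph adaptively until the root question is answered.
    A real planner would also decompose questions into child nodes and
    score alternative paths; this toy version walks a single frontier."""
    frontier = [root]
    for _ in range(max_steps):
        open_nodes = [n for n in frontier if plan_next(n) != "done"]
        if not open_nodes:
            break
        node = open_nodes[0]
        if plan_next(node) == "retrieve":
            node.evidence = retriever(node.question)
        else:
            node.answer = reasoner(node.question, node.evidence)
    return root.answer
```

The key property mirrored here is that the next action is chosen from the graph's current state rather than a fixed retrieve-then-reason pipeline, which is what allows multiple reasoning paths to coexist and recover from a bad intermediate step.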
Problem

Research questions and friction points this paper is trying to address.

Addresses error vulnerability in sequential multimodal reasoning paths
Eliminates need for expensive task-specific multimodal model training
Enables dynamic cross-modal retrieval without predefined target modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework with adaptive planning graphs
Modality-specific strategies for dynamic retrieval
Dynamic exploration of reasoning paths without training
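The modality-specific retrieval idea above can be sketched as a simple dispatch: predict the target modality of a query, then route it to a per-modality retriever. This is a toy sketch with a keyword-based predictor standing in for whatever classifier the paper actually uses; every function name here is hypothetical.

```python
# Hypothetical sketch of modality-specific retrieval dispatch.
def classify_modality(query: str) -> str:
    """Toy stand-in for a modality predictor (the real system would
    likely prompt an LLM to decide)."""
    visual_cues = ("image", "photo", "picture", "logo", "color")
    return "image" if any(cue in query.lower() for cue in visual_cues) else "text"

def retrieve_text(query: str) -> str:
    """Placeholder text retriever (e.g. dense passage retrieval)."""
    return f"[text passage matching: {query}]"

def retrieve_image(query: str) -> str:
    """Placeholder image retriever (e.g. CLIP-style matching, then captioning)."""
    return f"[image caption matching: {query}]"

RETRIEVERS = {"text": retrieve_text, "image": retrieve_image}

def retrieve(query: str) -> str:
    """Dispatch to the retriever suited to the predicted target modality."""
    return RETRIEVERS[classify_modality(query)](query)
```

The dispatch table is what makes the framework training-free and model-agnostic: swapping in a newer retriever for one modality changes a dictionary entry, not a trained joint model.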
Yiheng Hu
University of New South Wales, Sydney, Australia
Xiaoyang Wang
University of New South Wales, Sydney, Australia
Qing Liu
CSIRO Data61, Australia
Xiwei Xu
CSIRO Data61, Australia
Qian Fu
Research Scientist, CSIRO's Data61
Computer Vision, Computer Graphics
Wenjie Zhang
University of New South Wales, Sydney, Australia
Liming Zhu
Research Director at CSIRO's Data61 & Professor at University of New South Wales
Software Architecture, SE4AI, Responsible AI, AI Safety, Blockchain