🤖 AI Summary
Multimodal multi-hop question answering faces two key challenges: error propagation across intermediate reasoning steps and high training costs. This paper proposes a training-free framework guided by an adaptive planning graph that dynamically constructs multi-path reasoning structures, departing from the conventional single-chain paradigm of sequential retrieval and reasoning. The method employs a tripartite "planning–retrieval–reasoning" architecture and integrates modality-specific retrieval strategies, enabling flexible fusion of heterogeneous visual and textual information and adaptive expansion of multi-hop reasoning paths. Evaluated on MultimodalQA and WebQA, the approach matches or surpasses supervised baselines in accuracy while demonstrating superior robustness, generalization, and computational efficiency. Its core contribution is a training-free, multi-path, adaptive planning mechanism for multimodal reasoning: it eliminates reliance on labeled supervision, supports diverse reasoning trajectories, and selects paths dynamically based on query and context semantics.
📝 Abstract
Multimodal multi-hop question answering requires integrating information from diverse sources, such as images and texts, to derive answers. Existing methods typically rely on sequential retrieval and reasoning, where each step builds on the previous output. This single-path paradigm makes them vulnerable to errors introduced by misleading intermediate steps. Moreover, developing multimodal models can be computationally expensive, often requiring extensive training. To address these limitations, we propose a training-free framework guided by an Adaptive Planning Graph, which consists of planning, retrieval, and reasoning modules. The planning module analyzes the current state of the Adaptive Planning Graph and determines both the next action and where to expand the graph, enabling dynamic and flexible exploration of reasoning paths. To handle retrieval when the target modality is not specified in advance, we devise modality-specific strategies that adapt dynamically to distinct data types. Our approach preserves the characteristics of multimodal information without costly task-specific training, enabling seamless integration with up-to-date models. Experiments on MultimodalQA and WebQA show that our approach matches or outperforms existing models that rely on training.
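To make the abstract's planning–retrieval–reasoning loop concrete, here is a minimal illustrative sketch. All names (`Node`, `plan_next_step`, the toy retriever, and the fixed action schedule) are our own assumptions for illustration, not the paper's implementation: a real system would prompt an LLM with the graph state to choose actions and would call actual text/image retrievers.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the planning graph: a piece of retrieved evidence."""
    content: str   # retrieved evidence or an intermediate conclusion
    modality: str  # "text" or "image"
    children: list = field(default_factory=list)

def retrieve(query: str, modality: str) -> Node:
    # Stand-in for modality-specific retrieval (e.g. dense text search vs.
    # caption/vision-based matching). Returns canned evidence here.
    corpus = {
        "text": "The Eiffel Tower is located in Paris.",
        "image": "[image caption] The Eiffel Tower lit up at night.",
    }
    return Node(corpus[modality], modality)

def plan_next_step(question: str, graph: list) -> str:
    # Hypothetical planner: the paper's planning module inspects the graph
    # state and decides the next action and expansion point. Here we use a
    # fixed schedule: gather text, then image evidence, then reason.
    seen = {node.modality for node in graph}
    if "text" not in seen:
        return "retrieve_text"
    if "image" not in seen:
        return "retrieve_image"
    return "reason"

def answer(question: str, max_steps: int = 5) -> str:
    graph: list[Node] = []
    for _ in range(max_steps):
        action = plan_next_step(question, graph)
        if action == "retrieve_text":
            graph.append(retrieve(question, "text"))
        elif action == "retrieve_image":
            graph.append(retrieve(question, "image"))
        else:  # "reason": fuse the collected multimodal evidence
            return " | ".join(node.content for node in graph)
    return "no answer within step budget"

print(answer("Where is the landmark shown in the photo located?"))
```

Because no step's output is the sole input to the next, a misleading intermediate retrieval can be outweighed by evidence on other paths, which is the robustness argument the abstract makes against single-chain pipelines.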