🤖 AI Summary
Document images are information-dense, yet most queries depend on only a few local regions; one-pass multimodal large language models (MLLMs) that process the entire image without regard to query relevance often fail to focus on the critical visual content. To address this, we propose a query-driven "Chain-of-Box" mechanism, requiring no architectural modification, that autonomously identifies and progressively refines key regions in a coarse-to-fine manner, emulating human visual reasoning. Our contributions are threefold: (1) the first chain-style visual reasoning paradigm for document understanding; (2) the largest-scale visual reasoning supervision dataset to date (249K samples), built with a fully automatic annotation pipeline; and (3) a dual-task collaborative training framework that jointly optimizes region localization and box-query semantic alignment. Our method achieves an average +4.2% accuracy gain across seven benchmarks and four major MLLM families. Code, data, and models will be fully open-sourced.
📝 Abstract
Multimodal large language models (MLLMs) have made significant progress in document understanding. However, the information-dense nature of document images still poses challenges, as most queries depend on only a few relevant regions, with the rest being redundant. Existing one-pass MLLMs process entire document images without considering query relevance, often failing to focus on critical regions and producing unfaithful responses. Inspired by the human coarse-to-fine reading pattern, we introduce Doc-CoB (Chain-of-Box), a simple yet effective mechanism that integrates human-style visual reasoning into MLLMs without modifying their architecture. Our method allows the model to autonomously select the set of regions (boxes) most relevant to the query, and then focus attention on them for further understanding. We first design a fully automatic pipeline, integrating a commercial MLLM with a layout analyzer, to generate 249K training samples with intermediate visual reasoning supervision. We then incorporate two enabling tasks that improve box identification and box-query reasoning, which together enhance document understanding. Extensive experiments on seven benchmarks with four popular models show that Doc-CoB significantly improves performance, demonstrating its effectiveness and wide applicability. All code, data, and models will be released publicly.
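The coarse-to-fine box-selection loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual interface: `propose_boxes`, `refine_boxes`, and `answer` are hypothetical stand-ins for MLLM calls that emit boxes as ordinary text tokens, which is what allows the mechanism to work without any architectural change.

```python
from typing import Callable, List, Tuple

# Hypothetical box representation: (x1, y1, x2, y2) in pixel coordinates.
Box = Tuple[int, int, int, int]

def clamp_box(box: Box, width: int, height: int) -> Box:
    """Clip a predicted box to the image bounds."""
    x1, y1, x2, y2 = box
    return (max(0, x1), max(0, y1), min(width, x2), min(height, y2))

def chain_of_box_answer(
    image: object,                               # the document image (opaque here)
    query: str,
    width: int,
    height: int,
    propose_boxes: Callable[[object, str], List[Box]],          # MLLM stand-in: coarse regions
    refine_boxes: Callable[[object, str, List[Box]], List[Box]],  # MLLM stand-in: finer regions
    answer: Callable[[object, str, List[Box]], str],            # MLLM stand-in: final response
    rounds: int = 2,
) -> Tuple[str, List[List[Box]]]:
    """Coarse-to-fine loop: propose query-relevant boxes, progressively
    refine them, then answer conditioned on the final box set."""
    chain: List[List[Box]] = []
    boxes = [clamp_box(b, width, height) for b in propose_boxes(image, query)]
    chain.append(boxes)
    for _ in range(rounds - 1):
        boxes = [clamp_box(b, width, height) for b in refine_boxes(image, query, boxes)]
        chain.append(boxes)
    return answer(image, query, boxes), chain

# Toy demonstration with mock "model" callables.
def propose(image, query):
    return [(-10, 0, 60, 40), (100, 100, 300, 250)]  # one box spills off-image

def refine(image, query, boxes):
    return [(x1 + 5, y1 + 5, x2 - 5, y2 - 5) for (x1, y1, x2, y2) in boxes]

def respond(image, query, boxes):
    return f"answer from {len(boxes)} regions"

result, chain = chain_of_box_answer(None, "total amount?", 200, 200,
                                    propose, refine, respond)
print(result)    # answer from 2 regions
print(chain[0])  # first round, clamped: [(0, 0, 60, 40), (100, 100, 200, 200)]
```

The key design point mirrored here is that the box chain is just intermediate output between rounds: supervision on those intermediate boxes (the paper's 249K samples) is what teaches the model to localize before answering.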