Doc-CoB: Enhancing Multi-Modal Document Understanding with Visual Chain-of-Boxes Reasoning

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Document images are information-dense, and query-based localization relying solely on local regions often hinders multimodal large language models (MLLMs) from focusing on critical visual content. To address this, we propose a query-driven “Chain-of-Boxes” mechanism—requiring no architectural modification—that autonomously identifies and progressively refines key regions in a coarse-to-fine manner, emulating human visual reasoning. Our contributions are threefold: (1) the first chain-style visual reasoning paradigm; (2) the largest-scale visual reasoning supervision dataset to date (249K samples); and (3) a dual-task collaborative training framework jointly optimizing region localization and box-query semantic alignment. Leveraging an automated annotation pipeline and a differentiable box-query alignment objective, our method achieves an average +4.2% accuracy gain across seven benchmarks and four major MLLM families. Code, data, and models are fully open-sourced.

Technology Category

Application Category

📝 Abstract
Multimodal large language models (MLLMs) have made significant progress in document understanding. However, the information-dense nature of document images still poses challenges, as most queries depend on only a few relevant regions, with the rest being redundant. Existing one-pass MLLMs process entire document images without considering query relevance, often failing to focus on critical regions and producing unfaithful responses. Inspired by the human coarse-to-fine reading pattern, we introduce Doc-CoB (Chain-of-Box), a simple-yet-effective mechanism that integrates human-style visual reasoning into MLLM without modifying its architecture. Our method allows the model to autonomously select the set of regions (boxes) most relevant to the query, and then focus attention on them for further understanding. We first design a fully automatic pipeline, integrating a commercial MLLM with a layout analyzer, to generate 249k training samples with intermediate visual reasoning supervision. Then we incorporate two enabling tasks that improve box identification and box-query reasoning, which together enhance document understanding. Extensive experiments on seven benchmarks with four popular models show that Doc-CoB significantly improves performance, demonstrating its effectiveness and wide applicability. All code, data, and models will be released publicly.
Problem

Research questions and friction points this paper is trying to address.

Enhancing document understanding by focusing on relevant visual regions
Addressing redundancy in document images for accurate query responses
Integrating human-style visual reasoning into multimodal language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Box mechanism for visual reasoning
Automatic pipeline with layout analyzer
Enabling tasks for box-query reasoning
🔎 Similar Papers
No similar papers found.
Ye Mo
Ye Mo
Zhejiang University
Zirui Shao
Zirui Shao
Zhejiang University
K
Kai Ye
Zhejiang University
X
Xianwei Mao
Zhejiang University
B
Bo Zhang
Shanghai AI Laboratory
Hangdi Xing
Hangdi Xing
Student, Zhejiang University
Document UnderstandingVision-Language Models
P
Peng Ye
The Chinese University of Hong Kong
G
Gang Huang
Alibaba Group
K
Kehan Chen
Alibaba Group
Z
Zhou Huan
Alibaba Group
Z
Zixu Yan
Alibaba Group
S
Sheng Zhou
Zhejiang University