🤖 AI Summary
This work addresses the inefficiency of autoregressive decoding in vision-language models for document parsing by introducing, for the first time, a parallel token prediction mechanism. The authors propose a plug-and-play, model-agnostic approach that inserts learnable tokens into the input sequence and employs a tailored training objective to enable parallel multi-token generation. To support effective training, they also construct a high-quality, large-scale data generation pipeline for document parsing. Evaluated on OmniDocBench and olmOCR-bench, the method achieves 1.6×–2.2× decoding speedup, substantially improves sample efficiency, effectively mitigates hallucination, and demonstrates strong generalization across diverse document parsing tasks.
📝 Abstract
Document parsing, a fundamental and crucial vision task, is being revolutionized by vision-language models (VLMs). However, the autoregressive (AR) decoding inherent to VLMs creates a significant bottleneck, severely limiting parsing speed. In this paper, we propose Parallel-Token Prediction (PTP), a pluggable, model-agnostic, and simple yet effective method that enables VLMs to generate multiple future tokens in parallel with improved sample efficiency. Specifically, we insert learnable tokens into the input sequence and design corresponding training objectives to equip the model with parallel decoding capabilities for document parsing. Furthermore, to support effective training, we develop a comprehensive data generation pipeline that efficiently produces large-scale, high-quality document parsing training data for VLMs. Extensive experiments on OmniDocBench and olmOCR-bench demonstrate that our method not only significantly improves decoding speed (1.6×–2.2×) but also reduces model hallucinations and exhibits strong generalization ability.
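To make the decoding-loop structure concrete, here is a minimal, hypothetical sketch of the parallel-token idea: learnable placeholder tokens are appended to the prefix, and one forward pass fills in several future tokens at once instead of one. The model stub (`toy_model`), the placeholder token `<par>`, and the group size `K` are illustrative assumptions, not the paper's actual implementation or training objective.

```python
# Hypothetical sketch of parallel-token decoding (not the authors' code).
# Assumes a model that, given a prefix with k learnable placeholder tokens
# appended, returns k predicted tokens in a single forward pass.

K = 4          # tokens predicted per forward pass (assumed hyperparameter)
PAR = "<par>"  # hypothetical learnable placeholder token

def toy_model(prefix, k):
    """Stand-in for a VLM forward pass: emits deterministic token ids
    so the decoding loop below is runnable."""
    start = len(prefix)
    return [f"tok{start + i}" for i in range(k)]

def parallel_decode(prompt, target_len, k=K):
    """Append k placeholders, run one forward pass, keep the k predictions.
    Each pass yields k tokens instead of 1; the realized 1.6x-2.2x speedup
    would depend on how reliably the parallel predictions are usable."""
    out, passes = list(prompt), 0
    while len(out) - len(prompt) < target_len:
        _ = out + [PAR] * k       # placeholders mark the parallel positions
        out += toy_model(out, k)  # k new tokens from a single pass
        passes += 1
    return out[len(prompt):][:target_len], passes

tokens, n_passes = parallel_decode(["<img>"], 8)
# 8 tokens in 2 forward passes, versus 8 passes for one-token-at-a-time AR decoding
```

The contrast with autoregressive decoding is the loop body: an AR decoder would run one forward pass per token (`target_len` passes), while the parallel loop runs `ceil(target_len / k)` passes.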