🤖 AI Summary
Existing document parsing methods rely on multiple independent models, which increases system complexity and maintenance cost. This paper proposes the first lightweight generative unified framework (0.28B parameters) to jointly perform text detection and recognition within a single model. Its key contributions are: (1) a novel task-representation unification mechanism and a reciprocal collaborative training paradigm, which first reveal and exploit the substantial performance gain that recognition confers on detection; and (2) an enhanced joint optimization objective combined with multi-task representation fusion and a cross-task knowledge transfer strategy. Evaluated on four core document parsing tasks (text detection, text recognition, end-to-end spotting, and document layout analysis), the framework achieves state-of-the-art results while maintaining low computational overhead and high efficiency.
📝 Abstract
Document parsing is essential for analyzing complex document structures and extracting fine-grained information, and it supports numerous downstream applications. However, existing methods typically integrate multiple independent models to handle the various parsing tasks, which leads to high system complexity and maintenance overhead. To address this, we propose DocFusion, a lightweight generative model with only 0.28B parameters. It unifies task representations and enables collaborative training through an improved objective function. Our experiments reveal a mutually beneficial interaction among recognition tasks and show that integrating recognition data significantly improves detection performance. The final results demonstrate that DocFusion achieves state-of-the-art (SOTA) performance across all four key tasks.