DocFusion: A Unified Framework for Document Parsing Tasks

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing document parsing methods rely on multiple independent models, resulting in high system complexity and maintenance costs. This paper proposes DocFusion, a lightweight generative framework (0.28B parameters) that unifies multiple document parsing tasks within a single model. Its key contributions are: (1) a task-representation unification mechanism and a collaborative training paradigm, revealing and exploiting the substantial performance gain that recognition data provides for detection; and (2) an improved joint optimization objective that enables multi-task representation fusion and cross-task knowledge transfer. Evaluated on four core document parsing tasks (text detection, text recognition, end-to-end spotting, and document layout analysis), the framework achieves state-of-the-art results while maintaining low computational overhead and high efficiency.

📝 Abstract
Document parsing is essential for analyzing complex document structures and extracting fine-grained information, supporting numerous downstream applications. However, existing methods often require integrating multiple independent models to handle various parsing tasks, leading to high complexity and maintenance overhead. To address this, we propose DocFusion, a lightweight generative model with only 0.28B parameters. It unifies task representations and achieves collaborative training through an improved objective function. Experiments reveal a mutually beneficial interaction among recognition tasks, which we leverage: integrating recognition data significantly enhances detection performance. The final results demonstrate that DocFusion achieves state-of-the-art (SOTA) performance across four key tasks.
Problem

Research questions and friction points this paper is trying to address.

Existing document parsing pipelines integrate multiple independent models, one per task
Fragmented multi-model systems incur high complexity and maintenance overhead
Open question: can a single lightweight model match specialized models across parsing tasks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight generative model with 0.28B parameters
Unified task representations for collaborative training
Improved objective function; integrating recognition data boosts detection performance
Authors (Fudan University): Mingxu Chai, Ziyu Shen, Chong Zhang, Yue Zhang, Xiao Wang, Shihan Dou, Jihua Kang, Jiazheng Zhang, Qi Zhang