🤖 AI Summary
This study addresses the lack of systematic evaluation of how PDF preprocessing frameworks affect downstream domain-specific question answering. For the first time, it directly links PDF conversion quality to RAG-based QA performance by constructing a benchmark from Portuguese administrative documents. The work systematically compares four open-source tools (Docling, MinerU, Marker, and DeepSeek OCR) across various text extraction, cleaning, chunking, and metadata strategies, and includes GraphRAG for comparison. Results show that hierarchical chunking and metadata enrichment contribute more to QA accuracy than the choice of conversion tool itself. The best configuration (Docling with hierarchical chunking and image descriptions) achieves 94.1% accuracy, approaching human-annotated performance (97.1%), whereas GraphRAG attains only 82%, underscoring the critical role of structured preprocessing.
📝 Abstract
Retrieval-Augmented Generation (RAG) systems depend critically on the quality of document preprocessing, yet no prior study has evaluated PDF processing frameworks by their impact on downstream question-answering accuracy. We address this gap through a systematic comparison of four open-source PDF-to-Markdown conversion frameworks (Docling, MinerU, Marker, and DeepSeek OCR) across 19 pipeline configurations for extracting text and other content from PDFs, varying the conversion tool, cleaning transformations, splitting strategy, and metadata enrichment. Evaluation was performed using a manually curated 50-question benchmark over a corpus of 36 Portuguese administrative documents (1,706 pages, ~492K words), with LLM-as-judge scoring averaged over 10 runs. Two baselines bounded the results: naïve PDFLoader (86.9%) and manually curated Markdown (97.1%). Docling with hierarchical splitting and image descriptions achieved the highest automated accuracy (94.1%). Metadata enrichment and hierarchy-aware chunking contributed more to accuracy than the choice of conversion framework alone. Font-based hierarchy rebuilding consistently outperformed LLM-based approaches. An exploratory GraphRAG implementation scored only 82%, underperforming basic RAG, suggesting that naïve knowledge graph construction without ontological guidance does not yet justify its added complexity. These findings demonstrate that data preparation quality is the dominant factor in RAG system performance.
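To make the abstract's headline finding concrete, the sketch below shows one way hierarchy-aware chunking with metadata enrichment can work: Markdown output from a converter is split at heading boundaries, and each chunk carries its full heading path as metadata so the retriever sees document structure. This is a minimal illustration, not the paper's actual implementation; the function name and the breadcrumb format are hypothetical.

```python
import re

def hierarchical_chunks(markdown_text):
    """Split Markdown at heading boundaries, attaching the heading path
    ("breadcrumb") of each chunk as metadata. A simplified sketch of
    hierarchy-aware chunking, not the paper's implementation."""
    chunks = []
    path = {}   # heading level -> heading text, e.g. {1: "A", 2: "B"}
    buf = []    # body lines accumulated for the current section

    def flush():
        text = "\n".join(buf).strip()
        buf.clear()
        if text:
            breadcrumb = " > ".join(path[k] for k in sorted(path))
            chunks.append({"metadata": breadcrumb, "text": text})

    for line in markdown_text.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)  # ATX heading: #, ##, ...
        if m:
            flush()
            level = len(m.group(1))
            # Moving back up the hierarchy drops deeper headings
            path = {k: v for k, v in path.items() if k < level}
            path[level] = m.group(2).strip()
        else:
            buf.append(line)
    flush()
    return chunks

chunks = hierarchical_chunks("# Decree 12\nScope text.\n## Article 1\nDetails.")
# Each chunk now carries its heading path, e.g. "Decree 12 > Article 1",
# which an embedder or retriever can prepend to the chunk text.
```

In a real pipeline, the breadcrumb would typically be prepended to the chunk before embedding, which is one plausible reading of why metadata enrichment helps retrieval more than swapping conversion tools.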