🤖 AI Summary
To address the challenges of high noise levels and low OCR transcription accuracy in historical document images, this paper proposes a robust text extraction framework tailored for multimodal large language models (MLLMs). Methodologically, it integrates test-time multi-variant image augmentation—including padding, Gaussian blur, and grid distortion—with an enhanced Needleman–Wunsch sequence alignment algorithm to achieve consensus fusion across multiple Gemini 2.0 Flash transcriptions, yielding a final output with confidence scores. Key contributions are: (i) the first integration of dynamic image augmentation with interpretable, sequence-level alignment for MLLM-based historical document transcription; and (ii) a lightweight, end-to-end deployable ensemble decoding paradigm. Evaluated on a newly curated dataset of 622 Pennsylvania death records, the framework improves character-level accuracy by 4 percentage points over single-run transcription baselines, significantly enhancing both model robustness and result interpretability.
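The exact perturbation parameters are not given above, but the three augmentation families (padding, Gaussian blur, grid distortion) can be sketched in plain NumPy. This is an illustrative sketch, not the paper's implementation: the function names, parameter values, and the sinusoidal row-shift stand-in for grid distortion are assumptions, and images are assumed to be 2-D grayscale arrays.

```python
import numpy as np

def pad_image(img, pad=32, fill=255):
    # constant-pad on all sides (simulates extra page margin; fill=255 = white)
    return np.pad(img, pad, mode="constant", constant_values=fill)

def gaussian_blur(img, sigma=1.0):
    # separable Gaussian blur: 1-D convolution along rows, then columns
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img.astype(float))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out.astype(img.dtype)

def grid_distort(img, amp=2, period=40):
    # crude grid-warp stand-in: shift each row horizontally by a sinusoidal offset
    out = np.empty_like(img)
    for i, row in enumerate(img):
        shift = int(round(amp * np.sin(2 * np.pi * i / period)))
        out[i] = np.roll(row, shift)
    return out

def augment_variants(img):
    # one original plus one variant per augmentation family, each sent to the MLLM
    return [img, pad_image(img), gaussian_blur(img), grid_distort(img)]
```

Each variant would then be transcribed independently before the consensus-fusion step.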
📝 Abstract
We present a novel ensemble framework that stabilizes LLM-based text extraction from noisy historical documents. We transcribe multiple augmented variants of each image with Gemini 2.0 Flash and fuse the outputs with a custom Needleman–Wunsch-style aligner that yields both a consensus transcription and a confidence score. We also present a new dataset of 622 Pennsylvania death records and demonstrate that our method improves transcription accuracy by 4 percentage points relative to a single-shot baseline. We find that padding and blurring are most useful for improving accuracy, while grid-warp perturbations are best for separating high- and low-confidence cases. The approach is simple, scalable, and immediately deployable to other document collections and transcription models.
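The aligner itself is not spelled out above. As a minimal sketch of the idea (not the paper's implementation): align each variant's transcription to one reference transcript with classic Needleman–Wunsch, then take a column-wise majority vote, with confidence defined as the mean per-character agreement. The scoring parameters and the align-to-first-transcript heuristic are assumptions for illustration.

```python
from collections import Counter

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # global alignment of two strings; returns the two gapped sequences
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:  # traceback, preferring diagonal moves
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

def consensus(transcripts):
    # align every transcript to the first, then majority-vote per reference
    # position; confidence = mean agreement fraction across positions
    ref = transcripts[0]
    votes = [[c] for c in ref]
    for other in transcripts[1:]:
        aligned_ref, aligned_other = needleman_wunsch(ref, other)
        k = 0
        for ca, cb in zip(aligned_ref, aligned_other):
            if ca != "-":          # insertions relative to ref are dropped
                votes[k].append(cb)
                k += 1
    out, agree = [], []
    for col in votes:
        ch, n = Counter(col).most_common(1)[0]
        agree.append(n / len(col))
        if ch != "-":
            out.append(ch)
    conf = sum(agree) / len(agree) if agree else 1.0
    return "".join(out), conf
```

For example, `consensus(["John Smith", "John Smyth", "Jonn Smith"])` recovers `"John Smith"` with a confidence below 1.0, since two positions disagree across the three runs.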