LIGHT: Multi-Modal Text Linking on Historical Maps

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Historical maps exhibit highly variable text orientations and irregular layouts, making it difficult for existing methods to link fragmented textual entities (e.g., multi-word place names). This limitation stems largely from over-reliance on linguistic features and neglect of geometric layout cues. To address it, we propose LIGHT, a geometry-aware multi-modal text linking method built upon LayoutLMv3. Our approach introduces a geometry-aware embedding module that explicitly encodes polygonal text coordinates and spatial relationships, and incorporates a bi-directional reading-order modeling mechanism to deeply fuse visual, linguistic, and geometric features. To our knowledge, this is the first work to achieve effective synergy between spatial layout and semantic information for historical map text linking. Evaluated on the ICDAR 2024/2025 MapText competition datasets, our method significantly outperforms prior state-of-the-art approaches, demonstrating the efficacy of integrating geometric priors into multi-modal modeling for structured understanding of historical map text.

📝 Abstract
Text on historical maps provides valuable information for studies in history, economics, geography, and other related fields. Unlike structured or semi-structured documents, text on maps varies significantly in orientation, reading order, shape, and placement. Many modern methods can detect and transcribe text regions, but they struggle to effectively "link" the recognized text fragments, e.g., determining a multi-word place name. Existing layout analysis methods model word relationships to improve text understanding in structured documents, but they primarily rely on linguistic features and neglect geometric information, which is essential for handling map text. To address these challenges, we propose LIGHT, a novel multi-modal approach that integrates linguistic, image, and geometric features for linking text on historical maps. In particular, LIGHT includes a geometry-aware embedding module that encodes the polygonal coordinates of text regions to capture polygon shapes and their relative spatial positions on an image. LIGHT unifies this geometric information with the visual and linguistic token embeddings from LayoutLMv3, a pretrained layout analysis model. LIGHT uses the cross-modal information to predict the reading-order successor of each text instance directly with a bi-directional learning strategy that enhances sequence robustness. Experimental results show that LIGHT outperforms existing methods on the ICDAR 2024/2025 MapText Competition data, demonstrating the effectiveness of multi-modal learning for historical map text linking.
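As a rough illustration of what a geometry-aware embedding module might consume, the sketch below computes simple geometric features from a polygonal text region. This is not the paper's implementation; the feature choices (normalized centroid, box size, longest-edge orientation) are assumptions picked to show the kind of cues, such as polygon shape and relative position, that the abstract describes.

```python
import math

def polygon_geometry_features(polygon, img_w, img_h):
    """Compute a small geometric feature vector for a polygonal text region.

    polygon: list of (x, y) vertices in image coordinates.
    Returns [norm_cx, norm_cy, norm_w, norm_h, angle]: normalized centroid,
    bounding-box size, and the orientation of the longest polygon edge,
    i.e., illustrative stand-ins for the cues a learned embedding could encode.
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    # Dominant orientation: angle of the longest polygon edge, which roughly
    # tracks the text baseline direction for quadrilateral regions.
    best_len, angle = -1.0, 0.0
    for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
        d = math.hypot(x1 - x0, y1 - y0)
        if d > best_len:
            best_len, angle = d, math.atan2(y1 - y0, x1 - x0)
    return [cx / img_w, cy / img_h, w / img_w, h / img_h, angle]
```

In a full model, a vector like this would be projected (e.g., by a small MLP) into the same dimensionality as LayoutLMv3's token embeddings and fused with them.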
Problem

Research questions and friction points this paper is trying to address.

Linking fragmented text on historical maps
Integrating geometric features with linguistic data
Improving reading-order prediction for map text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal integration of linguistic, image, geometric features
Geometry-aware embedding for polygonal text regions
Bi-directional learning for reading-order prediction
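The bi-directional idea can be illustrated with a toy heuristic, not LIGHT's learned model: each region proposes a successor in a forward pass and a predecessor in a backward pass, and a link is kept only when the two directions agree. The nearest-neighbor scoring below is an assumed stand-in for the model's learned link scores.

```python
def link_reading_order(centroids):
    """Toy bi-directional linking over region centroids.

    Each region votes for a successor (nearest region to its right) and a
    predecessor (nearest region to its left); a link survives only when both
    directions agree. This mirrors the consistency idea behind bi-directional
    reading-order prediction, with distance standing in for learned scores.
    """
    def nearest(i, direction):
        best, best_d = None, float("inf")
        for j, (x, y) in enumerate(centroids):
            if j == i:
                continue
            dx = (x - centroids[i][0]) * direction
            if dx <= 0:  # only consider regions strictly in this direction
                continue
            d = dx + abs(y - centroids[i][1])
            if d < best_d:
                best, best_d = j, d
        return best

    links = []
    for i in range(len(centroids)):
        succ = nearest(i, +1)                 # forward pass: propose successor
        if succ is not None and nearest(succ, -1) == i:  # backward pass agrees
            links.append((i, succ))
    return links
```

For three regions laid out left to right, this yields the chain (0, 1), (1, 2); a region with no mutually agreed neighbor is left unlinked.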
Yijun Lin
University of Minnesota, Twin Cities
Spatiotemporal Prediction, Machine Learning
Rhett Olson
University of Minnesota, Minneapolis, Minnesota, United States
Junhan Wu
University of Minnesota, Minneapolis, Minnesota, United States
Yao-Yi Chiang
Associate Professor, Computer Science & Engineering, University of Minnesota
spatial AI, data mining, machine learning, geographic information science, computer vision
Jerod Weinman
Grinnell College, Grinnell, Iowa, United States