HAUR: Human Annotation Understanding and Recognition Through Text-Heavy Images

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual question answering (VQA) models struggle to interpret unstructured human annotations in images, such as handwritten notes, circled regions, and arrows. To address this gap, we introduce Human Annotation Understanding and Recognition (HAUR), a novel VQA task, and present HAUR-5, the first fine-grained benchmark dataset covering five realistic annotation types. To tackle challenges including irregular multimodal layout, implicit semantics, and heterogeneous text–image integration, we propose OCR-Mix: a model that jointly leverages OCR-derived text, text-guided visual feature enhancement, and cross-modal attention to explicitly encode the spatial–semantic structure of annotations. Experiments demonstrate that OCR-Mix significantly outperforms state-of-the-art VQA and document understanding models on HAUR-5. This work establishes the first systematic evaluation framework for fine-grained semantic understanding of human-generated annotations, advancing VQA toward real-world interactive scenarios with a new paradigm and benchmark.
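The summary above mentions cross-modal attention that fuses OCR-derived text with visual features. The page does not detail the authors' actual architecture, so the following is only an illustrative sketch of one plausible ingredient: scaled dot-product cross-attention in which visual patch tokens attend to OCR token embeddings. All shapes and names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(visual, text):
    """Visual tokens (queries) attend to OCR text tokens (keys/values).

    visual: (Nv, d) patch features; text: (Nt, d) OCR token embeddings.
    Returns (Nv, d) text-enriched visual features.
    """
    d = text.shape[-1]
    scores = visual @ text.T / np.sqrt(d)   # (Nv, Nt) similarity
    weights = softmax(scores, axis=-1)      # each visual token's attention over OCR tokens
    return weights @ text                   # (Nv, d) weighted sum of text embeddings

rng = np.random.default_rng(0)
visual = rng.normal(size=(49, 64))    # e.g. a 7x7 grid of patch features
ocr_text = rng.normal(size=(12, 64))  # embeddings of 12 OCR'd tokens
fused = cross_modal_attention(visual, ocr_text)
print(fused.shape)  # (49, 64)
```

In a full model the fused features would typically be projected and combined with the original visual stream (e.g. via a residual connection) before answer decoding; that step is omitted here for brevity.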

📝 Abstract
Visual Question Answering (VQA) tasks use images to convey critical information for answering text-based questions, one of the most common forms of question answering in real-world scenarios. Numerous vision-text models exist today and have performed well on certain VQA tasks. However, these models exhibit significant limitations in understanding human annotations on text-heavy images. To address this, we propose the Human Annotation Understanding and Recognition (HAUR) task. As part of this effort, we introduce the Human Annotation Understanding and Recognition-5 (HAUR-5) dataset, which encompasses five common types of human annotations. Additionally, we developed and trained our model, OCR-Mix. Through comprehensive cross-model comparisons, our results demonstrate that OCR-Mix outperforms other models on this task. Our dataset and model will be released soon.
Problem

Research questions and friction points this paper is trying to address.

Visual Question Answering
Human Text Annotations
Model Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

HAUR Task
OCR-Mix Model
Text Annotation Understanding
Authors

Yuchen Yang (Xiamen University)
Haoran Yan (Xiamen University)
Yanhao Chen (Xiamen University)
Qingqiang Wu (Xiamen University)
Qingqi Hong (Associate Professor, Xiamen University; Medical Image Analysis, Deep Learning)