Spatially Grounded Explanations in Vision Language Models for Document Visual Question Answering

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak interpretability and poor result reproducibility in Document Visual Question Answering (DocVQA), this paper proposes EaGERS, a training-free, model-agnostic vision-language reasoning framework. EaGERS partitions document images into a configurable spatial grid, uses multimodal embedding similarity with majority voting to ground the natural-language rationales generated by a vision language model to specific grid sub-regions, and then restricts answer generation to a masked image containing only the selected regions. It is the first method to combine multimodal rationale generation with spatial grounding without any fine-tuning. Evaluated on the DocVQA benchmark, EaGERS surpasses strong baselines on both answer accuracy and textual similarity metrics, while significantly improving transparency, traceability, and cross-model reproducibility.

📝 Abstract
We introduce EaGERS, a fully training-free and model-agnostic pipeline that (1) generates natural language rationales via a vision language model, (2) grounds these rationales to spatial sub-regions by computing multimodal embedding similarities over a configurable grid with majority voting, and (3) restricts the generation of responses only from the relevant regions selected in the masked image. Experiments on the DocVQA dataset demonstrate that our best configuration not only outperforms the base model on exact match accuracy and Average Normalized Levenshtein Similarity metrics but also enhances transparency and reproducibility in DocVQA without additional model fine-tuning.
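The grid-partitioning, similarity-matching, and majority-voting steps of the pipeline can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the grid size, the `top_k` cutoff, and the cosine-similarity scoring are assumptions, and the rationale/region embedding vectors stand in for outputs of real multimodal encoders.

```python
import numpy as np

def grid_regions(image, rows, cols):
    """Split an H x W (x C) image array into a rows x cols grid of sub-regions,
    returned in row-major order."""
    h, w = image.shape[:2]
    return [
        image[r * h // rows:(r + 1) * h // rows,
              c * w // cols:(c + 1) * w // cols]
        for r in range(rows) for c in range(cols)
    ]

def select_regions(rationale_emb, region_embs, top_k=2):
    """Rank grid cells by cosine similarity between a rationale embedding
    and per-region embeddings; return the indices of the top_k cells."""
    sims = region_embs @ rationale_emb / (
        np.linalg.norm(region_embs, axis=1) * np.linalg.norm(rationale_emb) + 1e-9
    )
    return list(np.argsort(sims)[::-1][:top_k])

def majority_vote(selections):
    """Keep only the region indices selected by a strict majority of the
    embedding models (each element of `selections` is one model's pick list)."""
    counts = {}
    for sel in selections:
        for idx in sel:
            counts[idx] = counts.get(idx, 0) + 1
    quorum = len(selections) / 2
    return sorted(i for i, c in counts.items() if c > quorum)
```

In this sketch, each embedding model votes for its top-ranked cells, and only cells backed by a majority survive, which is one plausible reading of the paper's "majority voting over a configurable grid".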
Problem

Research questions and friction points this paper is trying to address.

VLM answers in DocVQA lack interpretable, verifiable rationales
Rationales are rarely grounded to the specific spatial regions of the document that support them
Gains in accuracy and transparency typically require costly model fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates natural language rationales via a VLM
Grounds rationales to spatial sub-regions via embedding similarity and majority voting
Restricts answer generation to the relevant regions of the masked image
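The last step, constraining the answer to the selected regions, can be illustrated by masking out every unselected grid cell before re-querying the VLM. A minimal sketch, where row-major cell indexing and a plain fill value for masked cells are assumptions, not details taken from the paper:

```python
import numpy as np

def mask_to_regions(image, keep, rows, cols, fill=255):
    """Return a copy of `image` in which every grid cell whose row-major
    index is not in `keep` is replaced by the constant `fill` value."""
    h, w = image.shape[:2]
    out = np.full_like(image, fill)
    for idx in keep:
        r, c = divmod(idx, cols)
        r0, r1 = r * h // rows, (r + 1) * h // rows
        c0, c1 = c * w // cols, (c + 1) * w // cols
        out[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return out
```

The masked image, rather than the full document, would then be passed to the VLM together with the question, so the model can only read from the grounded regions.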