🤖 AI Summary
To address weak interpretability and poor reproducibility in Document Visual Question Answering (DocVQA), this paper proposes EaGERS, a training-free, model-agnostic vision-language reasoning framework. EaGERS first prompts a vision-language model to generate a natural-language rationale for the question, then grounds that rationale to sub-regions of a configurable spatial grid over the document image by computing multimodal embedding similarities and aggregating them with majority voting, and finally restricts answer generation to the selected regions via image masking. It is presented as the first method to combine multimodal rationale generation with explicit spatial grounding entirely without fine-tuning. Evaluated on the DocVQA benchmark, EaGERS surpasses strong baselines on both answer accuracy and textual-similarity metrics while improving the transparency, traceability, and cross-model reproducibility of its results.
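The grounding step above (grid partitioning, embedding similarity, majority voting) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, embeddings are plain float lists, and in practice the rationale and cell embeddings would come from real multimodal encoders.

```python
from collections import Counter
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def grid_cells(width, height, rows, cols):
    # Partition an image into rows x cols cell bounding boxes (x0, y0, x1, y1).
    cw, ch = width // cols, height // rows
    return [(c * cw, r * ch, (c + 1) * cw, (r + 1) * ch)
            for r in range(rows) for c in range(cols)]

def vote_regions(rationale_embs, cell_embs_per_model, top_k=2):
    # Each embedding model ranks grid cells by similarity to the rationale;
    # the cells that appear in the most top-k lists win (majority voting).
    votes = Counter()
    for rat_emb, cell_embs in zip(rationale_embs, cell_embs_per_model):
        ranked = sorted(range(len(cell_embs)),
                        key=lambda i: cosine(rat_emb, cell_embs[i]),
                        reverse=True)
        votes.update(ranked[:top_k])
    return [idx for idx, _ in votes.most_common(top_k)]
```

With two toy 2-D embedding models that agree, `vote_regions` returns the cells both models ranked highest, which are then used to mask the image for answer generation.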
📝 Abstract
We introduce EaGERS, a fully training-free and model-agnostic pipeline that (1) generates natural-language rationales with a vision-language model, (2) grounds these rationales to spatial sub-regions by computing multimodal embedding similarities over a configurable grid and aggregating them with majority voting, and (3) generates the final answer from a masked image in which only the selected relevant regions remain visible. Experiments on the DocVQA dataset demonstrate that our best configuration not only outperforms the base model on exact-match accuracy and Average Normalized Levenshtein Similarity but also enhances transparency and reproducibility in DocVQA without additional model fine-tuning.
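Step (3), restricting generation to the selected regions, amounts to zeroing out every pixel outside the winning grid cells before the image is passed back to the model. A minimal sketch (plain nested lists stand in for a real image array; the bounding boxes would come from the voting step):

```python
def mask_image(pixels, selected_boxes):
    # Return a copy of the image in which only pixels inside the selected
    # grid-cell bounding boxes (x0, y0, x1, y1) are kept; all others are zeroed.
    height, width = len(pixels), len(pixels[0])
    out = [[0] * width for _ in range(height)]
    for (x0, y0, x1, y1) in selected_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = pixels[y][x]
    return out
```

For example, masking a 4x4 image with the single top-left 2x2 cell keeps that quadrant and blanks the rest, so the answering model can only attend to the grounded evidence.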