Responses Fall Short of Understanding: Revealing the Gap between Internal Representations and Responses in Visual Document Understanding

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in large vision-language models (VLMs) for visual document understanding: their generated responses do not always faithfully reflect their internal comprehension. The study systematically uncovers, for the first time, a notable inconsistency between model outputs and internal representations. By employing linear probing to analyze semantic representations across different layers, the authors reveal that intermediate layers often encode more accurate task-relevant information than the final output layer. Building on this insight, they propose a novel fine-tuning strategy that explicitly leverages intermediate-layer representations. This approach substantially improves both linear probe accuracy and end-task answer accuracy, effectively bridging the discrepancy between internal understanding and external generation. The findings offer a promising new direction for enhancing the reliability and trustworthiness of VLMs in complex reasoning tasks.
📝 Abstract
Visual document understanding (VDU) is a challenging task for large vision language models (LVLMs), requiring the integration of visual perception, text recognition, and reasoning over structured layouts. Although recent LVLMs have shown progress on VDU benchmarks, their performance is typically evaluated based on generated responses, which may not necessarily reflect whether the model has actually captured the required information internally. In this paper, we investigate how information required to solve VDU tasks is represented across different layers of LLMs within LVLMs using linear probing. Our study reveals that (1) there is a clear gap between internal representations and generated responses, and (2) information required to solve the task is often encoded more linearly from intermediate layers than from the final layer. Motivated by these findings, we explore fine-tuning strategies that target intermediate layers. Experiments show that fine-tuning intermediate layers improves both linear probing accuracy and response accuracy while narrowing the gap.
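The probing method described in the abstract can be illustrated with a small, self-contained sketch. This is not the authors' code: it fits a simple linear probe (least-squares regression onto one-hot labels, a common lightweight stand-in for a logistic-regression probe) on hypothetical per-layer hidden states, showing how probe accuracy can differ across layers. The feature arrays and the `probe_accuracy` helper are illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

def probe_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear probe (least-squares fit to one-hot targets) and
    return its classification accuracy on held-out features."""
    n_classes = int(train_labels.max()) + 1
    Y = np.eye(n_classes)[train_labels]                            # one-hot targets
    X = np.hstack([train_feats, np.ones((len(train_feats), 1))])   # append bias column
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)                      # closed-form linear fit
    Xt = np.hstack([test_feats, np.ones((len(test_feats), 1))])
    preds = (Xt @ W).argmax(axis=1)
    return float((preds == test_labels).mean())

# Toy stand-in for per-layer hidden states: one "layer" is pure noise,
# another carries a linearly decodable label signal.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=400)
noisy_layer = rng.normal(size=(400, 16))                 # no task information
signal_layer = noisy_layer + 3.0 * labels[:, None]       # linearly separable

for name, feats in [("noisy layer", noisy_layer), ("signal layer", signal_layer)]:
    acc = probe_accuracy(feats[:300], labels[:300], feats[300:], labels[300:])
    print(f"{name}: probe accuracy = {acc:.2f}")
```

In the paper's setting, the features would instead be hidden states extracted from each transformer layer of the LVLM on VDU examples, and a probe is trained per layer; the finding is that intermediate-layer probes often decode the task answer more accurately than final-layer ones.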
Problem

Research questions and friction points this paper is trying to address.

visual document understanding
large vision language models
internal representations
response accuracy
linear probing
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual document understanding
linear probing
intermediate layer fine-tuning
internal representation
vision-language models
Haruka Kawasaki
Human Informatics Labs., NTT, Inc.
Ryota Tanaka
Human Informatics Labs., NTT, Inc.
Kyosuke Nishida
NTT Human Informatics Laboratories, NTT Corporation
natural language processing
vision and language
artificial intelligence
data mining