🤖 AI Summary
To address the limited visual perception of multimodal large language models (MLLMs), this paper proposes Blink, a framework that dynamically adjusts visual token resolution within a single forward pass, emulating the sequential "blink-like" process by which humans scan a scene and focus on salient regions. The method comprises two key components: (1) a layer-wise, attention-guided saliency scanning strategy that adaptively concentrates computation on semantically critical regions; and (2) a plug-and-play Token Super-Resolution (TokenSR) module that dynamically expands salient tokens and prunes them in later layers once they lose focus. Crucially, the approach requires no additional training or architectural modification; it enhances visual representation quality efficiently at inference time. Evaluated on multiple vision-language understanding benchmarks, including MMBench, OCRBench, and TextVQA, the method consistently outperforms strong baselines, demonstrating that dynamic resolution control meaningfully improves both visual perception fidelity and downstream multimodal reasoning performance.
📝 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress on various vision-language tasks, yet their visual perception remains limited. Humans, in comparison, perceive complex scenes efficiently by dynamically scanning and focusing on salient regions in a sequential "blink-like" process. Motivated by this strategy, we first investigate whether MLLMs exhibit similar behavior. Our pilot analysis reveals that MLLMs naturally attend to different visual regions across layers and that selectively allocating more computation to salient tokens can enhance visual perception. Building on this insight, we propose Blink, a dynamic visual token resolution framework that emulates the human-inspired process within a single forward pass. Specifically, Blink includes two modules: saliency-guided scanning and dynamic token resolution. It first estimates the saliency of visual tokens in each layer based on the attention map, and extends important tokens through a plug-and-play token super-resolution (TokenSR) module. In the next layer, it drops the extended tokens when they lose focus. This dynamic mechanism balances broad exploration and fine-grained focus, thereby enhancing visual perception adaptively and efficiently. Extensive experiments validate Blink, demonstrating its effectiveness in enhancing visual perception and multimodal understanding.
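The per-layer loop the abstract describes (score visual tokens from the attention map, super-resolve the salient ones, drop extensions that lose focus) can be sketched minimally in NumPy. Everything below is illustrative: the function names, the mean-attention saliency proxy, and the toy `token_super_resolution` (which just splits a token into sub-tokens) are assumptions for exposition, not the paper's actual TokenSR module.

```python
import numpy as np

def saliency_from_attention(attn, n_visual):
    """Hypothetical saliency proxy: mean attention each visual token
    receives from all query positions in one layer's attention map."""
    # attn: (n_queries, n_keys); the first n_visual keys are visual tokens
    return attn[:, :n_visual].mean(axis=0)

def token_super_resolution(tokens, idx, factor):
    """Toy stand-in for the paper's TokenSR module: expand each selected
    token into `factor` sub-tokens (a real module would draw on
    higher-resolution visual features instead of splitting)."""
    return np.repeat(tokens[idx], factor, axis=0) / factor

def blink_step(tokens, attn, top_k=2, factor=4):
    """One layer of the saliency-guided scan: keep all base visual tokens
    and append super-resolved copies of the top-k salient ones. A later
    layer would recompute saliency and discard extensions that lose focus."""
    sal = saliency_from_attention(attn, len(tokens))
    idx = np.argsort(sal)[-top_k:]                    # most salient tokens
    extended = token_super_resolution(tokens, idx, factor)
    return np.concatenate([tokens, extended], axis=0), idx
```

Because the expansion and pruning both happen inside the forward pass, the token budget grows only where attention indicates fine detail matters, which is the exploration/focus balance the abstract describes.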