BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception

📅 2025-10-10
🤖 AI Summary
Current multimodal large language model (MLLM) evaluations overemphasize linguistic reasoning while neglecting deep visual perceptual reasoning. Method: We propose BLINK-Twice—the first benchmark explicitly designed to assess fine-grained visual perception and analytical reasoning, enforcing strict image-only reliance to enable a cognitive shift from “seeing” to “observing.” It comprises seven categories of visual challenge tasks, naturally adversarial image pairs, and structured reasoning-chain annotations. Innovatively, it integrates visual attention focusing, iterative observation, and active interaction mechanisms, augmented by chain-of-thought prompting and self-critique strategies. Contribution/Results: Evaluated on 20 state-of-the-art MLLMs, our benchmark reveals that conventional language-centric reasoning improvements yield only marginal and unstable gains, whereas visual interaction mechanisms consistently enhance performance—demonstrating the necessity of a vision-centered reasoning paradigm and shifting evaluation from outcome-oriented to process-interpretable assessment.

📝 Abstract
Recently, Multimodal Large Language Models (MLLMs) have made rapid progress, particularly in enhancing their reasoning capabilities. However, existing reasoning benchmarks still primarily assess language-based reasoning, often treating visual input as replaceable context. To address this gap, we introduce BLINK-Twice, a vision-centric reasoning benchmark grounded in challenging perceptual tasks. Instead of relying on external knowledge, our tasks require models to reason from visual content alone, shifting the focus from language-based to image-grounded reasoning. Compared to prior perception benchmarks, it moves beyond shallow perception ("see") and requires fine-grained observation and analytical reasoning ("observe"). BLINK-Twice integrates three core components: seven types of visual challenges for testing visual reasoning, natural adversarial image pairs that enforce reliance on visual content, and annotated reasoning chains for fine-grained evaluation of the reasoning process rather than final answers alone. We evaluate 20 leading MLLMs, including 12 foundation models and 8 reasoning-enhanced models. BLINK-Twice poses a significant challenge to current models. While existing reasoning strategies in the language space, such as chain-of-thought or self-criticism, can improve performance, they often result in unstable and redundant reasoning. We observe that repeated image observation improves performance across models, and active visual interaction, as demonstrated by models like o3, highlights the need for a new paradigm for vision reasoning. The dataset is publicly available at https://github.com/PicoTrex/BLINK-Twice.
Problem

Research questions and friction points this paper is trying to address.

Assesses visual reasoning beyond language-based benchmarks
Requires fine-grained observation and analytical visual perception
Evaluates reasoning process through annotated chains and adversarial images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-centric benchmark for perceptual reasoning tasks
Natural adversarial image pairs enforce visual reliance
Annotated reasoning chains enable fine-grained evaluation