🤖 AI Summary
This work proposes a biologically inspired active vision strategy that addresses the computational redundancy and lack of selective attention in conventional image classification models, which typically process entire images uniformly. Drawing inspiration from human saccadic eye movements, the method leverages attention maps generated by a self-supervised DINO Vision Transformer to guide sequential, gaze-like fixations on informative regions of an image. Evaluated on ImageNet, this approach achieves competitive or superior classification accuracy while processing significantly fewer image regions compared to full-image inference. Furthermore, the model outperforms traditional saliency-based methods in predicting human fixation locations, demonstrating its effectiveness and novelty in enabling efficient, attention-driven visual perception.
📝 Abstract
Human vision achieves remarkable perceptual performance while operating under strict metabolic constraints. A key ingredient is selective attention, driven by rapid saccadic eye movements that constantly reposition the high-resolution fovea onto task-relevant locations, unlike conventional AI systems that process entire images with equal emphasis. Our work draws inspiration from the human visual system to build more efficient image-processing models. Using DINO, a self-supervised Vision Transformer that produces attention maps strikingly similar to human gaze patterns, we explore a saccade-inspired method that focuses processing on key regions of visual space. To do so, we run a standard classification task on the ImageNet dataset and measure how each successive saccade affects the model's class scores. This selective-processing strategy preserves most of the full-image classification performance and can even surpass it in certain cases. By benchmarking against established saliency models built for human gaze prediction, we show that DINO provides superior fixation guidance for selecting informative regions. These findings highlight Vision Transformer attention as a promising basis for biologically inspired active vision and open new directions for efficient, neuromorphic visual processing.
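The core loop described above — using an attention map to place gaze-like fixations on informative regions one at a time — can be sketched as follows. This is an illustrative sketch, not the authors' exact procedure: it assumes a precomputed 2D attention map (e.g. DINO's CLS-token attention over a 14×14 patch grid, as produced by ViT-S/16 on a 224-pixel image) and a hypothetical greedy fixation policy with inhibition of return, i.e. each chosen foveal window is suppressed so the next saccade lands elsewhere.

```python
import numpy as np

def select_fixations(attn, n_fixations=5, fovea=3):
    """Greedily pick fixation centers from a 2D attention map.

    attn        : 2D float array (e.g. DINO CLS attention over patches)
    n_fixations : number of saccades to simulate
    fovea       : side length of the square foveal window, in patches

    After each fixation the foveal window is suppressed
    (inhibition of return), so successive saccades visit
    new high-attention regions.
    """
    attn = attn.astype(float).copy()
    r = fovea // 2
    fixations = []
    for _ in range(n_fixations):
        # next saccade lands on the most attended remaining patch
        i, j = np.unravel_index(np.argmax(attn), attn.shape)
        fixations.append((int(i), int(j)))
        # suppress the visited foveal window
        attn[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1] = -np.inf
    return fixations

# Toy attention map with two hot spots on a 14x14 patch grid
attn = np.zeros((14, 14))
attn[3, 4] = 1.0
attn[10, 11] = 0.8
print(select_fixations(attn, n_fixations=2))  # → [(3, 4), (10, 11)]
```

In the paper's setting, each selected patch window would then be fed to the classifier and the change in class scores recorded after every saccade; here only the fixation-selection step is shown.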