A saccade-inspired approach to image classification using vision transformer attention maps

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a biologically inspired active vision strategy that addresses the computational redundancy and lack of selective attention in conventional image classification models, which typically process entire images uniformly. Drawing inspiration from human saccadic eye movements, the method leverages attention maps generated by a self-supervised DINO Vision Transformer to guide sequential, gaze-like fixations on informative regions of an image. Evaluated on ImageNet, this approach achieves competitive or superior classification accuracy while processing significantly fewer image regions compared to full-image inference. Furthermore, the model outperforms traditional saliency-based methods in predicting human fixation locations, demonstrating its effectiveness and novelty in enabling efficient, attention-driven visual perception.

📝 Abstract
Human vision achieves remarkable perceptual performance while operating under strict metabolic constraints. A key ingredient is the selective attention mechanism, driven by rapid saccadic eye movements that constantly reposition the high-resolution fovea onto task-relevant locations, unlike conventional AI systems that process entire images with equal emphasis. Our work draws inspiration from the human visual system to create smarter, more efficient image processing models. Using DINO, a self-supervised Vision Transformer that produces attention maps strikingly similar to human gaze patterns, we explore a saccade-inspired method that focuses processing on key regions in visual space. To do so, we use the ImageNet dataset in a standard classification task and measure how each successive saccade affects the model's class scores. This selective-processing strategy preserves most of the full-image classification performance and can even outperform it in certain cases. By benchmarking against established saliency models built for human gaze prediction, we demonstrate that DINO provides superior fixation guidance for selecting informative regions. These findings highlight Vision Transformer attention as a promising basis for biologically inspired active vision and open new directions for efficient, neuromorphic visual processing.
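The core loop described in the abstract — pick the most attended location, fixate there, then saccade to the next informative region — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a 2D attention map has already been extracted from a ViT such as DINO, and uses a simple inhibition-of-return rule (suppressing a neighbourhood around each fixation) so that successive saccades land on distinct regions; the function name and radius parameter are illustrative choices.

```python
import numpy as np

def select_fixations(attention, n_fixations=3, radius=2):
    """Greedily pick fixation points from a 2D attention map.

    After each pick, a square neighbourhood around the fixation is
    suppressed (inhibition of return), so the next saccade is drawn
    to a different high-attention region.
    """
    attn = attention.astype(float).copy()
    h, w = attn.shape
    fixations = []
    for _ in range(n_fixations):
        # Saccade to the current attention peak.
        r, c = np.unravel_index(np.argmax(attn), attn.shape)
        fixations.append((int(r), int(c)))
        # Inhibit the visited neighbourhood.
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        attn[r0:r1, c0:c1] = -np.inf
    return fixations

# Toy attention map with two distinct hot spots.
attn = np.zeros((8, 8))
attn[1, 1] = 0.9
attn[6, 5] = 0.8
print(select_fixations(attn, n_fixations=2))  # → [(1, 1), (6, 5)]
```

In the paper's setting, each selected fixation would define an image crop fed to the classifier, and the change in class scores after each saccade is what the authors measure; the sketch above only covers the fixation-selection step.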
Problem

Research questions and friction points this paper is trying to address.

saccade
selective attention
image classification
Vision Transformer
active vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

saccade-inspired
Vision Transformer
attention maps
active vision
efficient image classification
Matthis Dallain
Institut de Neurosciences de la Timone, Aix-Marseille Université, CNRS, Marseille, 13005, France
Laurent Rodriguez
Laboratoire d’Électronique, Antennes et Télécommunications, Université Côte d’Azur, CNRS, Sophia Antipolis, 06903, France
Laurent Udo Perrinet
Institut de Neurosciences de la Timone, Aix-Marseille Université, CNRS, Marseille, 13005, France
Benoît Miramond
Full professor, Université Côte d'Azur, LEAT
Brain-inspired computing, artificial neural networks, neuroscience, bio-inspired artificial intelligence, neuromorphic systems