EVA: Bridging Performance and Human Alignment in Hard-Attention Vision Models for Image Classification

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Optimizing solely for classification accuracy often compromises the alignment between vision models and human gaze patterns, undermining interpretability. This work proposes a neuroscience-inspired hard-attention mechanism that jointly optimizes classification performance and human alignment without requiring eye-tracking supervision. By integrating sequential fixations, center-surround representations, variance control, and adaptive gating, the method explicitly models and balances the trade-off between task accuracy and human-like scanpaths for the first time. Evaluated on CIFAR-10, the approach maintains competitive classification accuracy while significantly improving consistency with human eye movements, as measured by NSS and DTW. Its scalability and generalization are further demonstrated on the ImageNet-100 and COCO-Search18 benchmarks.
📝 Abstract
Optimizing vision models purely for classification accuracy can impose an alignment tax, degrading human-like scanpaths and limiting interpretability. We introduce EVA, a neuroscience-inspired hard-attention mechanistic testbed that makes the performance-human-likeness trade-off explicit and adjustable. EVA samples a small number of sequential glimpses using a minimal fovea-periphery representation with a CNN-based feature extractor, and integrates variance control and adaptive gating to stabilize and regulate attention dynamics. EVA is trained with the standard classification objective, without gaze supervision. On CIFAR-10 with dense human gaze annotations, EVA improves scanpath alignment under established metrics such as DTW and NSS while maintaining competitive accuracy. Ablations show that CNN-based feature extraction drives accuracy but suppresses human-likeness, whereas variance control and gating restore human-aligned trajectories with minimal performance loss. We further validate EVA's scalability on ImageNet-100 and evaluate scanpath alignment on COCO-Search18 without gaze supervision or finetuning, where EVA yields human-like scanpaths on natural scenes. Overall, EVA provides a principled framework for trustworthy, human-interpretable active vision.
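The two scanpath metrics the abstract cites are standard and easy to state concretely: NSS (Normalized Scanpath Saliency) z-scores a predicted saliency map and averages it at human fixation locations, and DTW (Dynamic Time Warping) aligns two fixation sequences under an elastic matching. A minimal illustrative implementation of both (not the authors' evaluation code):

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the predicted saliency
    map, then average it at the human fixation coordinates."""
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / (s.std() + 1e-8)  # z-normalize the map
    return float(np.mean([s[r, c] for r, c in fixations]))

def dtw(path_a, path_b):
    """Dynamic Time Warping distance between two scanpaths, using
    Euclidean distance between fixation points as the local cost."""
    a = [np.asarray(p, dtype=float) for p in path_a]
    b = [np.asarray(p, dtype=float) for p in path_b]
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[-1, -1])
```

Higher NSS means predicted attention lands on human fixations; lower DTW means the predicted and human scanpaths follow similar trajectories.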
Problem

Research questions and friction points this paper is trying to address.

human alignment
scanpath
interpretability
hard-attention
vision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

hard-attention
human alignment
scanpath modeling
variance control
adaptive gating
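The hard-attention and center-surround ideas behind these contributions can be sketched as a fovea-periphery glimpse: at each fixation the model sees a sharp central crop plus a wider, coarser surround. The sketch below uses hypothetical patch sizes and omits border padding; it is not the paper's implementation:

```python
import numpy as np

def glimpse(image, center, fovea=8):
    """Illustrative fovea-periphery glimpse (hypothetical sizes):
    a high-resolution central crop plus a twice-as-wide surround
    sampled at half resolution, so both share one output shape."""
    r, c = center
    h = fovea // 2
    fov = image[r - h:r + h, c - h:c + h]              # sharp fovea
    per = image[r - 2*h:r + 2*h:2, c - 2*h:c + 2*h:2]  # coarse periphery
    return fov, per
```

A sequential hard-attention model would feed such glimpse pairs, one fixation at a time, into a feature extractor and classifier, with the sequence of chosen centers forming the scanpath that is compared against human gaze.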
🔎 Similar Papers
2024-08-29 · arXiv.org · Citations: 7
Pengcheng Pan, Yonekura Shogo, Kuniyoshi Yasuo (The University of Tokyo)
Tags: artificial intelligence, active vision