Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high cost and low efficiency of manual annotation in ecological image analysis by proposing a label-free, species-level automatic clustering method. The authors establish the first systematic benchmark framework to evaluate zero-shot clustering performance across 60 species using five Vision Transformer models—including DINOv2 and DINOv3—combined with various dimensionality reduction and clustering algorithms. Experimental results demonstrate that the proposed approach achieves a V-measure of 0.958 (0.943 under fully unsupervised settings) at the species level, requiring expert review of only 1.14% of images. Moreover, the method effectively uncovers ecologically meaningful intraspecific structures, such as variations in sex, age, and coat color. To support broader adoption, the authors release an open-source toolkit enabling ecologists to efficiently analyze large-scale wildlife image datasets.

📝 Abstract
Manual labeling of animal images remains a significant bottleneck in ecological research, limiting the scale and efficiency of biodiversity monitoring efforts. This study investigates whether state-of-the-art Vision Transformer (ViT) foundation models can sort thousands of unlabeled animal images directly into species-level clusters. We present a comprehensive benchmarking framework evaluating five ViT models combined with five dimensionality reduction techniques and four clustering algorithms (two supervised, two unsupervised) across 60 species (30 mammals and 30 birds), with each test using a random subset of 200 validated images per species. We investigate when clustering succeeds at the species level, where it fails, and whether clustering within species reveals ecologically meaningful patterns such as sex, age, or phenotypic variation. Our results demonstrate near-perfect species-level clustering (V-measure: 0.958) using DINOv3 embeddings with t-SNE and supervised hierarchical clustering. Unsupervised approaches achieve competitive performance (0.943) while requiring no prior species knowledge, rejecting only 1.14% of images as outliers requiring expert review. We further demonstrate robustness to realistic long-tailed species distributions and show that intentional over-clustering can reliably extract intra-specific variation, including age classes, sexual dimorphism, and pelage differences. We introduce an open-source benchmarking toolkit and provide recommendations to help ecologists select appropriate methods for their specific taxonomic groups and data.
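The evaluation pipeline the abstract describes (ViT embeddings, dimensionality reduction, clustering, V-measure scoring) can be sketched with scikit-learn. This is a minimal illustration only: it uses synthetic Gaussian blobs as stand-ins for real DINOv3 image embeddings, and the blob parameters (5 "species", 384-dimensional features) are assumptions for the demo, not values from the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)
n_species, per_species, dim = 5, 40, 384  # hypothetical demo sizes

# Synthetic stand-ins for ViT embeddings: one well-separated blob per species.
centers = rng.normal(scale=10.0, size=(n_species, dim))
X = np.vstack([c + rng.normal(size=(per_species, dim)) for c in centers])
y_true = np.repeat(np.arange(n_species), per_species)

# Reduce embeddings to 2-D with t-SNE, then apply hierarchical clustering.
# Fixing n_clusters to the known species count corresponds to the
# "supervised" clustering setting described in the abstract.
X2 = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
labels = AgglomerativeClustering(n_clusters=n_species).fit_predict(X2)

# V-measure compares predicted clusters against ground-truth species labels.
score = v_measure_score(y_true, labels)
print(round(score, 3))
```

On real data, `X` would instead hold frozen-backbone embeddings extracted from each image; the fully unsupervised variant would additionally have to estimate the number of clusters and flag outliers for expert review.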
Problem

Research questions and friction points this paper is trying to address.

Zero-Shot Clustering
Animal Image Analysis
Species-Level Clustering
Intra-specific Variation
Biodiversity Monitoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Transformers
Zero-Shot Clustering
Species-Level Clustering
Intra-specific Variation
Benchmarking Framework
Hugo Markoff
Department of Chemistry and Bioscience, Aalborg University, Aalborg, Denmark
Stefan Hein Bengtson
Aalborg University
computer vision, machine learning, robotics, affordance detection, semi-autonomous control
Michael Orsted
Department of Chemistry and Bioscience, Aalborg University, Aalborg, Denmark