🤖 AI Summary
This study systematically evaluates the applicability and transferability of vision foundation models (e.g., DINOv2, CLIP) to face recognition (FR), spanning ultra-low-resource (1K identities) to large-scale training regimes. To address their suboptimal zero-shot performance on FR, we propose a lightweight supervised fine-tuning paradigm coupled with cross-domain adaptation and, for the first time, empirically demonstrate that synthetic face data significantly boosts few-shot FR performance. Comprehensive evaluation across multiple benchmarks using ViT-S/L architectures shows that fine-tuned DINOv2 ViT-S achieves 87.10% average verification accuracy under the 1K-identity setting, substantially outperforming its non-fine-tuned counterpart (64.70%) and the same architecture trained from scratch (69.96%). Under large-scale training, fine-tuned DINOv2 ViT-L attains 96.03% accuracy, with lower computational cost and strong generalization. Our core contribution lies in revealing that FR demands precise, task-specific adaptation of foundation models, and in establishing an efficient, scalable fine-tuning framework enhanced by synthetic data.
📝 Abstract
Foundation models are predominantly trained in an unsupervised or self-supervised manner on highly diverse, large-scale datasets, making them broadly applicable to various downstream tasks. In this work, we investigate for the first time whether such models are suitable for the specific domain of face recognition (FR). We further propose and demonstrate the adaptation of these models for FR across different levels of data availability, including synthetic data. Extensive experiments are conducted on multiple foundation models and datasets of varying scales for training and fine-tuning, with evaluation on a wide range of benchmarks. Our results indicate that, despite their versatility, pre-trained foundation models tend to underperform in FR compared with similar architectures trained specifically for this task. However, fine-tuning foundation models yields promising results, often surpassing models trained from scratch, particularly when training data is limited. For example, after fine-tuning on only 1K identities, DINOv2 ViT-S achieved an average verification accuracy of 87.10% on the LFW, CALFW, CPLFW, CFP-FP, and AgeDB30 benchmarks, compared to 64.70% for the same model without fine-tuning, while training the same architecture, ViT-S, from scratch on 1K identities reached only 69.96%. With access to larger-scale FR training datasets, these accuracies reach 96.03% and 95.59% for the DINOv2 and CLIP ViT-L models, respectively. Compared with ViT-based architectures trained from scratch for FR, fine-tuned foundation models of the same architectures achieve similar performance while requiring lower training computational cost and not relying on the assumption of extensive data availability. We further demonstrate the use of synthetic face data, showing improved performance over both pre-trained foundation models and ViT models.
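The verification accuracies reported above come from a pairwise protocol: two face embeddings are compared by cosine similarity and declared "same identity" when the similarity exceeds a threshold. A minimal sketch of that comparison step, where the toy embeddings and the threshold value are hypothetical and purely illustrative, not taken from the paper:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(emb1, emb2, threshold=0.5):
    """Declare a pair 'same identity' when similarity exceeds the threshold.

    The threshold here is illustrative; in practice it is tuned on each
    benchmark's development folds.
    """
    return cosine_similarity(emb1, emb2) >= threshold

# Toy low-dimensional embeddings (hypothetical; real FR embeddings are
# typically 512-dimensional outputs of the backbone).
same_pair = ([0.9, 0.1, 0.2], [0.85, 0.15, 0.25])
diff_pair = ([0.9, 0.1, 0.2], [-0.1, 0.95, 0.1])

print(verify(*same_pair))  # similar directions -> True
print(verify(*diff_pair))  # dissimilar directions -> False
```

Benchmark accuracy is then simply the fraction of pairs whose `verify` decision matches the ground-truth same/different label, averaged here over the five benchmarks.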