🤖 AI Summary
Existing domain generalization (DG) benchmarks lack sufficient challenge due to contamination by large-scale pretraining data, rendering them inadequate for evaluating foundation models (e.g., CLIP) on truly unseen domains. Crucially, enforcing domain invariance without preserving domain-specific information may impair generalization. Method: We propose *domain-aware enhancement*, a novel paradigm advocating explicit modeling and disentanglement of domain-specific and class-discriminative features as a prerequisite for robust DG. Within the CLIP framework, we introduce a dedicated domain head, synthesize diverse domain-shifted data, perform domain-aware representation enhancement, and enforce feature disentanglement. Results: Extensive experiments across 33 heterogeneous datasets demonstrate that our approach significantly outperforms state-of-the-art DG methods, especially under severe distribution shifts, achieving substantial gains in out-of-distribution generalization performance.
📝 Abstract
Evaluating domain generalization (DG) for foundation models like CLIP is challenging, as web-scale pretraining data potentially covers many existing benchmarks. Consequently, current DG evaluation may neither be sufficiently challenging nor adequately test genuinely unseen data scenarios. To better assess the performance of CLIP on DG in-the-wild, a scenario where CLIP encounters challenging unseen data, we consider two approaches: (1) evaluating on 33 diverse datasets with quantified out-of-distribution (OOD) scores after fine-tuning CLIP on ImageNet, and (2) using unlearning to make CLIP 'forget' some domains as an approximation. We observe that CLIP's performance deteriorates significantly on more OOD datasets. To address this, we present CLIP-DCA (Disentangling Classification from enhanced domain Aware representations). Our approach is motivated by the observation that while standard domain invariance losses aim to make representations domain-invariant, this can harm foundation models by forcing them to discard domain-aware representations that are beneficial for generalization. We instead hypothesize that enhancing domain awareness is a prerequisite for effective domain-invariant classification in foundation models. CLIP-DCA identifies and enhances domain awareness within CLIP's encoders using a separate domain head and synthetically generated diverse domain data. Simultaneously, it encourages domain-invariant classification through disentanglement from the domain features. CLIP-DCA shows significant improvements within this challenging evaluation compared to existing methods, particularly on datasets that are more OOD.
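To make the disentanglement idea concrete, the following is a minimal NumPy sketch of one plausible way to penalize overlap between class-head and domain-head features: drive the per-sample cosine similarity between the two feature vectors toward zero so the heads encode complementary information. This is an illustrative stand-in, not the paper's actual loss; the function name, the cosine-based formulation, and the toy data are all assumptions for demonstration.

```python
import numpy as np

def disentanglement_loss(class_feats, domain_feats, eps=1e-8):
    """Illustrative disentanglement penalty (hypothetical, not the paper's loss).

    Computes the mean squared cosine similarity between paired
    class-head and domain-head feature vectors. Minimizing it pushes
    the two representations toward orthogonality, i.e. the class
    features carry no domain information and vice versa.
    """
    # L2-normalize each feature vector (eps avoids division by zero)
    c = class_feats / (np.linalg.norm(class_feats, axis=1, keepdims=True) + eps)
    d = domain_feats / (np.linalg.norm(domain_feats, axis=1, keepdims=True) + eps)
    cos = np.sum(c * d, axis=1)        # per-sample cosine similarity
    return float(np.mean(cos ** 2))    # 0 = fully disentangled, 1 = identical

# Toy check: orthogonal feature pairs give zero loss, identical pairs give one.
rng = np.random.default_rng(0)
class_feats = rng.standard_normal((4, 8))
domain_feats = np.zeros((4, 8))
domain_feats[:, 0] = 1.0               # domain features live on axis 0
class_feats[:, 0] = 0.0                # class features avoid axis 0 entirely
loss_orth = disentanglement_loss(class_feats, domain_feats)  # -> 0.0
loss_same = disentanglement_loss(domain_feats, domain_feats)  # -> 1.0
```

In a full training setup this term would be combined with the standard classification loss and a domain-prediction loss on the domain head, so that domain awareness is enhanced while classification is disentangled from it.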