🤖 AI Summary
This paper addresses the research gap in applying Contrastive Language-Image Pretraining (CLIP) to domain generalization (DG) and domain adaptation (DA). Methodologically, it establishes a unified taxonomy by systematically analyzing the prevailing paradigms (prompt optimization, backbone feature reuse, and source-available/source-free transfer), elucidating CLIP's zero-shot cross-domain transfer mechanisms and its pathways to improved robustness. The analysis identifies three critical bottlenecks: overfitting, insufficient domain diversity, and computational inefficiency. To overcome these, the work draws on key techniques such as prompt learning, feature alignment, knowledge distillation, and domain-invariant representation learning. The contributions include both a rigorous methodological framework for CLIP-based DG/DA and actionable insights into architectural design and training strategies. Collectively, this study provides theoretical foundations and practical guidelines for developing more generalizable and deployable cross-domain vision models.
📝 Abstract
As machine learning evolves, domain generalization (DG) and domain adaptation (DA) have become crucial for enhancing model robustness across diverse environments. Contrastive Language-Image Pretraining (CLIP) plays a significant role in these tasks, offering powerful zero-shot capabilities that allow models to perform effectively in unseen domains. However, no comprehensive survey currently exists that systematically explores the applications of CLIP in DG and DA, a gap this review aims to fill. In DG, we categorize methods into optimizing prompt learning for task alignment and leveraging CLIP as a backbone for effective feature extraction, both of which enhance model adaptability. For DA, we examine both source-available methods that utilize labeled source data and source-free approaches that rely primarily on target-domain data, emphasizing knowledge-transfer mechanisms and strategies for improving performance across diverse contexts. Key challenges, including overfitting, limited domain diversity, and computational efficiency, are addressed, alongside future research opportunities for advancing robustness and efficiency in practical applications. By synthesizing the existing literature and pinpointing critical gaps, this survey offers researchers and practitioners concrete directions for effectively leveraging CLIP in domain generalization and adaptation. Ultimately, this work aims to foster innovation and collaboration in the quest for more resilient machine learning models that can perform reliably across diverse real-world scenarios. An up-to-date list of papers is maintained at: https://github.com/jindongli-Ai/Survey_on_CLIP-Powered_Domain_Generalization_and_Adaptation.
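The zero-shot capability discussed above rests on a simple mechanism: CLIP scores an image against one text prompt per class (e.g. "a photo of a {class}") by cosine similarity in a shared embedding space, then normalizes the temperature-scaled similarities with a softmax. The sketch below illustrates only this scoring step in NumPy; the random vectors are placeholders standing in for the outputs of CLIP's image and text encoders, and the temperature value is an illustrative assumption, not CLIP's learned one.

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, temperature=0.01):
    """Zero-shot class probabilities, CLIP-style: cosine similarity
    between one image embedding and one text embedding per class,
    divided by a temperature and passed through a softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # shape: (num_classes,)
    logits -= logits.max()                      # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# Placeholder embeddings; real CLIP would produce these with a ViT/ResNet
# image encoder and a Transformer text encoder over class-name prompts.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))           # 3 hypothetical classes

probs = zero_shot_probs(image_emb, text_embs)
print(probs)  # a distribution over the 3 classes, summing to 1
```

Because the class set is specified only through text prompts, the same scoring rule applies unchanged to unseen domains and label sets, which is why prompt design and prompt learning are central levers in the CLIP-based DG/DA methods this survey categorizes.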