🤖 AI Summary
Problem: Cross-domain few-shot object detection (CD-FSOD) suffers from the weak cross-domain generalization of base models and a heavy reliance on costly retraining.
Method: We propose a lightweight “Augment–Search” two-stage adaptation framework: (1) multi-strategy image augmentation—integrating CutMix, Mosaic, and ColorJitter—to enrich source-domain representation; (2) a differentiable, grid-based subdomain search algorithm that efficiently locates the optimal adaptation subdomain within the parameter space of the grounding model (GroundingDINO), enabling fine-tuning-free dynamic domain transfer.
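The three augmentation strategies named above are standard in detection pipelines. As a minimal illustration (not the authors' implementation), each can be sketched in a few lines of NumPy on images stored as float arrays in [0, 1]; the mixing ratios and jitter ranges here are illustrative defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def cutmix(img_a, img_b, alpha=1.0):
    """Paste a random rectangle from img_b onto img_a (CutMix-style mixing)."""
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)  # mixing ratio drawn from Beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = rng.integers(0, h - cut_h + 1)
    x = rng.integers(0, w - cut_w + 1)
    out = img_a.copy()
    out[y:y + cut_h, x:x + cut_w] = img_b[y:y + cut_h, x:x + cut_w]
    return out

def mosaic(imgs):
    """Tile four equally sized images into one 2x2 mosaic."""
    top = np.concatenate([imgs[0], imgs[1]], axis=1)
    bottom = np.concatenate([imgs[2], imgs[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)

def color_jitter(img, brightness=0.2, contrast=0.2):
    """Randomly rescale brightness and contrast, keeping values in [0, 1]."""
    b = rng.uniform(1 - brightness, 1 + brightness)
    c = rng.uniform(1 - contrast, 1 + contrast)
    mean = img.mean()
    return np.clip((img - mean) * c * b + mean * b, 0.0, 1.0)
```

In practice CutMix and Mosaic also require remapping the bounding-box annotations of the pasted regions, which is omitted here for brevity.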
Contribution/Results: This work pioneers the coupling of structured subdomain search with data augmentation, establishing a lightweight parameter-space navigation paradigm that eliminates dependence on retraining. Our method achieves an average +5.2% mAP improvement across multiple CD-FSOD benchmarks, supports zero-shot domain transfer, and significantly reduces deployment overhead. Code is publicly available.
📝 Abstract
Foundation models pretrained on extensive datasets, such as GroundingDINO and LAE-DINO, have performed remarkably on the cross-domain few-shot object detection (CD-FSOD) task. Through rigorous few-shot training, we found that integrating image-based data augmentation techniques with a grid-based sub-domain search strategy significantly enhances the performance of these foundation models. Building upon GroundingDINO, we employ several widely used image augmentation methods and establish optimization objectives to efficiently navigate the expansive domain space in search of optimal sub-domains. This facilitates efficient few-shot object detection and offers a new route to CD-FSOD: efficiently searching for the optimal parameter configuration of the foundation model. Our findings substantially advance the practical deployment of vision-language models in data-scarce environments, offering critical insights into optimizing their cross-domain generalization capabilities without labor-intensive retraining. Code is available at https://github.com/jaychempan/ETS.
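The grid-based sub-domain search described above can be pictured as an exhaustive sweep over a small grid of candidate configurations, scoring each on a few-shot validation split. A generic sketch follows; the parameter names (`box_threshold`, `text_threshold`, typical GroundingDINO inference knobs) and the scoring function are illustrative assumptions, not the paper's actual search space or objective:

```python
from itertools import product

def grid_search_subdomain(param_grid, score_fn):
    """Evaluate every point on a parameter grid and return the best one.

    param_grid: dict mapping parameter name -> list of candidate values
                (hypothetical example: detection thresholds, augmentation strength).
    score_fn:   callable(config) -> validation metric, e.g. few-shot mAP.
    """
    keys = list(param_grid)
    best_cfg, best_score = None, float("-inf")
    # Cartesian product enumerates every grid point exactly once.
    for values in product(*(param_grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage with a synthetic score peaking at (0.3, 0.25);
# a real run would score each config by evaluating the detector.
grid = {"box_threshold": [0.2, 0.3, 0.4], "text_threshold": [0.2, 0.25]}
best, score = grid_search_subdomain(
    grid,
    lambda c: -abs(c["box_threshold"] - 0.3) - abs(c["text_threshold"] - 0.25),
)
```

Because only inference-time configurations are searched, no gradients flow through the detector and the base model's weights stay frozen, which is what makes the adaptation retraining-free.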