Class-invariant Test-Time Augmentation for Domain Generalization

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Distribution shift severely degrades the cross-domain performance of deep models, while existing domain generalization (DG) methods often rely on multi-source domain training or computationally expensive test-time adaptation. To address this, we propose a lightweight, training-free test-time augmentation method: during inference, input images are transformed via elastic deformation and grid warping to generate class-invariant variants; the predictions on these variants are then filtered by model confidence and aggregated into an ensemble decision. Our approach introduces only minimal deformable augmentations and a confidence-guided fusion mechanism, requiring no additional training and offering plug-and-play compatibility with diverse DG algorithms and backbone architectures. Evaluated on the PACS and Office-Home benchmarks, it consistently improves the cross-domain generalization of multiple state-of-the-art models, yielding average accuracy gains of 2.1–4.7% and significantly enhancing robustness to unseen domains.

📝 Abstract
Deep models often suffer significant performance degradation under distribution shifts. Domain generalization (DG) seeks to mitigate this challenge by enabling models to generalize to unseen domains. Most prior approaches rely on multi-domain training or computationally intensive test-time adaptation. In contrast, we propose a complementary strategy: lightweight test-time augmentation. Specifically, we develop a novel Class-Invariant Test-Time Augmentation (CI-TTA) technique. The idea is to generate multiple variants of each input image through elastic and grid deformations that nevertheless belong to the same class as the original input. Their predictions are aggregated through a confidence-guided filtering scheme that removes unreliable outputs, ensuring the final decision relies on consistent and trustworthy cues. Extensive experiments on the PACS and Office-Home datasets demonstrate consistent gains across different DG algorithms and backbones, highlighting the effectiveness and generality of our approach.
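The inference-time pipeline described in the abstract, generate deformed variants, keep only confident predictions, and average them, can be sketched in plain NumPy. This is an illustrative reconstruction, not the paper's implementation: the smoothing kernel, the deformation strength `alpha`, the smoothness scale `sigma`, and the confidence threshold `tau` are all assumed hyperparameters, and nearest-neighbour resampling stands in for whatever interpolation the authors use.

```python
import numpy as np

def elastic_deform(img, alpha=3.0, sigma=4, rng=None):
    """Warp an (H, W) image with a random smooth displacement field.

    A NumPy-only stand-in for the elastic/grid deformations in CI-TTA;
    alpha (displacement strength) and sigma (smoothness) are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]

    def smooth(field):
        # Box-filter the random field so neighbouring pixels move together.
        k = 2 * sigma + 1
        pad = np.pad(field, sigma, mode="edge")
        out = np.zeros_like(field)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + h, dx:dx + w]
        return out / (k * k)

    dx = smooth(rng.standard_normal((h, w))) * alpha
    dy = smooth(rng.standard_normal((h, w))) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return img[src_y, src_x]  # nearest-neighbour resampling

def ci_tta_predict(model, img, n_variants=8, tau=0.6, rng=None):
    """Confidence-guided CI-TTA: predict on the original image plus deformed
    variants, drop low-confidence predictions, and average the rest."""
    rng = np.random.default_rng(0) if rng is None else rng
    variants = [img] + [elastic_deform(img, rng=rng) for _ in range(n_variants)]
    probs = np.stack([model(v) for v in variants])   # (n_variants + 1, n_classes)
    conf = probs.max(axis=1)                         # per-variant confidence
    keep = probs[conf >= tau]
    if keep.size == 0:                               # fall back to the original
        keep = probs[:1]
    return keep.mean(axis=0)
```

Because the deformations are class-invariant, averaging only the confident predictions suppresses variants that the model finds ambiguous, which is the intended effect of the confidence-guided fusion step.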
Problem

Research questions and friction points this paper is trying to address.

Addresses performance drop in deep models under distribution shifts
Proposes lightweight test-time augmentation for domain generalization
Generates class-invariant image variants with confidence-based filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Class-invariant elastic and grid deformations
Confidence-guided filtering for aggregation
Lightweight test-time augmentation technique
Zhicheng Lin
School of Computing and Artificial Intelligence, Southwest Jiaotong University, China
Xiaolin Wu
Professor of Electrical and Computer Engineering, McMaster University
image processing, compression, quantization, multimedia coding, algorithms
Xi Zhang
ANGEL Lab, Nanyang Technological University, Singapore