🤖 AI Summary
To address the limited transferability of black-box adversarial attacks, this paper pioneers the integration of domain generalization into adversarial example generation, proposing a novel domain-generalization-driven ensemble attack paradigm. To overcome two key limitations of existing methods (insufficient inter-model gradient sharing and static weight assignment), it introduces a consensus gradient direction synthesis mechanism and a dual-harmony weighting strategy that jointly optimize intra-model gradient consistency and inter-model gradient diversity. The method combines singular value decomposition, gradient consistency regularization, and a dynamically weighted ensemble. Extensive experiments on ImageNet and CIFAR-10 across heterogeneous architectures, including ResNet, VGG, and ViT, demonstrate an average 12.7% improvement in transfer-based attack success rate over state-of-the-art methods.
📝 Abstract
The development of model ensemble attacks has significantly improved the transferability of adversarial examples, but this progress also poses severe threats to the security of deep neural networks. Existing methods face two critical challenges: insufficient capture of shared gradient directions across models and a lack of adaptive weight allocation mechanisms. To address these issues, we propose Harmonized Ensemble for Adversarial Transferability (HEAT), a novel method that introduces domain generalization into adversarial example generation for the first time. HEAT consists of two key modules: the Consensus Gradient Direction Synthesizer, which uses Singular Value Decomposition (SVD) to synthesize shared gradient directions, and the Dual-Harmony Weight Orchestrator, which dynamically balances intra-domain coherence (stabilizing gradients within individual models) against inter-domain diversity (enhancing transferability across models). Experimental results demonstrate that HEAT significantly outperforms existing methods across various datasets and settings, offering a new perspective and direction for adversarial attack research.
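The abstract does not spell out the exact formulas behind the two modules, but the general idea can be sketched. The snippet below is a minimal, hypothetical NumPy illustration, not the paper's implementation: it extracts a shared gradient direction from a stack of per-model gradients via SVD (the dominant right singular vector), and computes ensemble weights by softmax over a score that trades off intra-model coherence (agreement of each model's current gradient with its previous-step gradient) against inter-model diversity (dissimilarity to the other models' gradients). All function names and the `alpha` trade-off parameter are assumptions for illustration.

```python
import numpy as np

def consensus_direction(grads):
    """Synthesize a shared gradient direction from per-model gradients via SVD.

    grads: (n_models, d) array of flattened gradients.
    Returns a unit vector (d,): the dominant right singular vector of the
    normalized gradient matrix, sign-aligned with the mean gradient.
    """
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in grads])
    _, _, vt = np.linalg.svd(G, full_matrices=False)
    v = vt[0]
    if v @ G.mean(axis=0) < 0:  # resolve the sign ambiguity of SVD
        v = -v
    return v

def dual_harmony_weights(grads, prev_grads, alpha=0.5):
    """Hypothetical dynamic weighting in the spirit of the Dual-Harmony
    Weight Orchestrator: balance per-model gradient stability across steps
    (coherence) against dissimilarity to the other models (diversity)."""
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in grads])
    P = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in prev_grads])
    coherence = np.einsum('md,md->m', G, P)   # cos(g_t, g_{t-1}) per model
    sim = G @ G.T                             # pairwise cosine similarities
    n = len(grads)
    diversity = 1.0 - (sim.sum(axis=1) - 1.0) / (n - 1)
    score = alpha * coherence + (1 - alpha) * diversity
    w = np.exp(score - score.max())           # stable softmax
    return w / w.sum()
```

In an iterative attack loop, the perturbation update would then follow the weighted combination of per-model gradients projected toward the consensus direction; the precise combination rule used by HEAT is not given in this abstract.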