🤖 AI Summary
Robustness of classification models in high-energy physics discovery remains a critical challenge, particularly under adversarial conditions. Method: This paper proposes an efficient gradient-based adversarial attack designed to achieve maximal misclassification rates with minimal perturbations. It integrates multi-round adaptive gradient updates, diverse random initializations, and cross-sample mixing augmentation, leveraging the differentiable structure of the target model to obtain high-quality gradients and effective perturbations. Contribution/Results: The proposed method achieves state-of-the-art performance in both perturbation magnitude (ℓ₂/ℓ∞ norm) and attack success rate, outperforming competing approaches, and ranked first in Task 1 of the adversarial attack challenge at ECML-PKDD 2025. This work provides a scalable, computationally efficient benchmark attack for model security assessment in scientific machine learning.
📝 Abstract
This report presents the winning solution for Task 1 of *Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery* at ECML-PKDD 2025. The task required designing an adversarial attack against a provided classification model that maximizes misclassification while minimizing perturbations. Our approach employs a multi-round gradient-based strategy that leverages the differentiable structure of the model, augmented with random initialization and sample-mixing techniques to enhance effectiveness. The resulting attack achieved the best results in perturbation size and fooling success rate, securing first place in the competition.
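The multi-round gradient strategy with random restarts and sample mixing described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' competition code: the toy logistic model, the `pgd_attack` and `mix_samples` helpers, and all hyperparameters (`eps`, `alpha`, `steps`, `restarts`, `lam`) are assumptions chosen for demonstration.

```python
import numpy as np

def toy_model_grad(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x for label y in {0, 1}.
    Stands in for the differentiable competition model (assumption)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return (p - y) * w  # d(loss)/dx for a linear-logistic model

def pgd_attack(x, y, w, eps=0.5, alpha=0.1, steps=20, restarts=3, rng=None):
    """Multi-round gradient ascent on the loss, projected onto an l_inf ball,
    restarted from several random initializations; keeps the worst-case point."""
    rng = np.random.default_rng(0) if rng is None else rng
    best, best_loss = x.copy(), -np.inf
    for _ in range(restarts):
        # Random initialization inside the perturbation budget.
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        for _ in range(steps):
            g = toy_model_grad(x_adv, w, y)
            x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
            x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the l_inf ball
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if loss > best_loss:
            best_loss, best = loss, x_adv
    return best

def mix_samples(x_a, x_b, lam=0.9):
    """Cross-sample mixing: blend two inputs to diversify attack start points."""
    return lam * x_a + (1 - lam) * x_b
```

Under this reading, the "multi-round" aspect corresponds to the iterated projected gradient steps, the random restarts explore different basins of the loss surface, and mixed samples can seed further restarts with more diverse starting points.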