Colliding with Adversaries at ECML-PKDD 2025 Adversarial Attack Competition 1st Prize Solution

📅 2025-10-18
🤖 AI Summary
Robustness of classification models in high-energy physics discovery remains a critical challenge, particularly under adversarial conditions. Method: This paper proposes an efficient adversarial attack method tailored for black-box and gray-box settings, designed to achieve maximal misclassification rates with minimal perturbations. It integrates multi-round adaptive gradient updates, diverse random initializations, and cross-sample mixing augmentation—leveraging the differentiable structure of target models to enhance gradient estimation quality and attack transferability. Contribution/Results: The proposed method achieves state-of-the-art performance in both perturbation magnitude (ℓ₂/ℓ∞ norm) and attack success rate, outperforming existing approaches. It ranked first in the ECML-PKDD 2025 Adversarial Attack Competition. This work provides a scalable, computationally efficient benchmark attack tool for model security assessment in scientific machine learning.

📝 Abstract
This report presents the winning solution for Task 1 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery at ECML-PKDD 2025. The task required designing an adversarial attack against a provided classification model that maximizes misclassification while minimizing perturbations. Our approach employs a multi-round gradient-based strategy that leverages the differentiable structure of the model, augmented with random initialization and sample-mixing techniques to enhance effectiveness. The resulting attack achieved the best results in perturbation size and fooling success rate, securing first place in the competition.
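The exact implementation of the winning attack is not given in this summary, but the ingredients it names (multi-round gradient ascent on a differentiable model, random initialization, keeping the best restart) can be sketched as a generic PGD-with-restarts loop. The linear softmax classifier below is a hypothetical stand-in for the competition model, used only so the input gradient has a closed form:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pgd_attack(W, x, y, eps=0.5, alpha=0.1, steps=20, restarts=3, seed=0):
    """Multi-round gradient attack with random restarts (generic PGD sketch).

    The model is a linear softmax classifier with logits = x @ W, a
    minimal stand-in for the provided competition model. Returns the
    perturbed input inside an L-inf ball of radius eps that most
    increases the cross-entropy loss of the true label y.
    """
    rng = np.random.default_rng(seed)
    onehot = np.eye(W.shape[1])[y]
    best_adv, best_loss = x.copy(), -np.inf
    for _ in range(restarts):
        # diverse random initialization inside the eps-ball
        delta = rng.uniform(-eps, eps, size=x.shape)
        for _ in range(steps):
            p = softmax((x + delta) @ W)
            # closed-form gradient of cross-entropy w.r.t. the input
            grad = (p - onehot) @ W.T
            # ascend the loss, then project back into the eps-ball
            delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
        loss = -np.log(softmax((x + delta) @ W)[y] + 1e-12)
        if loss > best_loss:  # keep the strongest restart per sample
            best_loss, best_adv = loss, x + delta
    return best_adv
```

Keeping only the best of several random restarts is what makes the multi-round scheme pay off: a single initialization can stall in a flat region of the loss, while the max over restarts is monotonically stronger.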
Problem

Research questions and friction points this paper is trying to address.

Design adversarial attacks to maximize model misclassification
Minimize perturbation size while ensuring attack effectiveness
Leverage gradient-based strategies against physics classification models
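The task's two competing objectives can be made concrete with a small evaluation helper. This is a hypothetical sketch (not the competition's official scoring code): it reports the fooling rate alongside the mean ℓ₂ and max ℓ∞ perturbation norms, the quantities an attack must trade off:

```python
import numpy as np

def attack_metrics(x_clean, x_adv, y_true, predict):
    """Hypothetical evaluation helper: fooling rate versus
    perturbation size. `predict` maps a batch of inputs to labels."""
    delta = x_adv - x_clean
    fooled = predict(x_adv) != y_true
    flat = delta.reshape(len(delta), -1)
    return {
        "fooling_rate": float(np.mean(fooled)),             # higher is better
        "mean_l2": float(np.mean(np.linalg.norm(flat, axis=1))),  # lower is better
        "max_linf": float(np.abs(delta).max()),             # lower is better
    }
```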
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-round gradient-based adversarial attack strategy
Leverages differentiable model structure for optimization
Augmented with random initialization and sample-mixing techniques
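The summary does not detail the sample-mixing scheme, but a common mixup-style variant blends each input with a randomly paired sample before the gradient step, which can smooth gradient estimates and improve transferability. A minimal sketch under that assumption:

```python
import numpy as np

def mix_samples(x, beta=1.0, rng=None):
    """Cross-sample mixing augmentation (mixup-style sketch; the
    competition solution's exact scheme is an assumption here).
    Each row of x is blended with a randomly permuted partner row
    using a Beta-distributed mixing coefficient."""
    if rng is None:
        rng = np.random.default_rng(0)
    # one mixing coefficient per sample, broadcast over feature dims
    lam = rng.beta(beta, beta, size=(len(x),) + (1,) * (x.ndim - 1))
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm]
```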
Dimitris Stefanopoulos
Aristotle University of Thessaloniki
Andreas Voskou
Boltzmann Research
Machine Learning · Deep Learning