🤖 AI Summary
To address the insufficient adversarial robustness of image quality assessment (IQA) metrics, this work establishes the first adversarial defense benchmark specifically for IQA, systematically evaluating 25 defense methods against 14 attacks in both non-adaptive and adaptive settings. We propose an IQA-specific framework for evaluating adversarial defenses and introduce a dual-objective evaluation criterion that jointly considers IQA score fidelity and perceptual image quality preservation. Experiments cover mainstream attacks, including PGD, CW, and PatchAttack, as well as representative IQA metrics such as LPIPS, NIQE, and BRISQUE. Results reveal that most existing defenses significantly degrade either score consistency or visual quality; only three defense categories achieve a favorable trade-off between robustness and fidelity. The benchmark platform is publicly released and supports continuous updates and community submission of new methods.
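To make the attack setting concrete, here is a minimal, hypothetical sketch of a PGD-style attack on an IQA metric. Real IQA metrics are neural networks whose gradients come from autodiff; here `toy_iqa_score` and its analytic gradient are stand-ins invented purely for illustration, not part of the benchmark.

```python
import numpy as np

def toy_iqa_score(x):
    # Stand-in for a differentiable IQA metric (real metrics are deep networks)
    return float(np.mean(x))

def pgd_attack(x, epsilon=0.03, alpha=0.01, steps=10):
    """Toy PGD that inflates the score of the stand-in metric above.

    For toy_iqa_score the gradient w.r.t. every pixel is 1/N, so each
    ascent step nudges all pixels upward until the L_inf ball's
    boundary is reached.
    """
    x_adv = x.copy()
    grad = np.full_like(x, 1.0 / x.size)  # analytic gradient of the toy score
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad)             # gradient-ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep valid pixel range
    return x_adv
```

The same loop structure underlies the non-adaptive attacks in the benchmark; adaptive attacks additionally differentiate through the defense itself.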
📝 Abstract
In the field of Image Quality Assessment (IQA), the adversarial robustness of metrics is a critical concern. This paper presents a comprehensive benchmarking study of defense mechanisms in response to the rise of adversarial attacks on IQA. We systematically evaluate 25 defense strategies, including adversarial purification, adversarial training, and certified robustness methods, testing them against 14 adversarial attack algorithms of various types in both non-adaptive and adaptive settings. We analyze the differences between defenses and their applicability to IQA tasks, where a defense should preserve both the IQA scores and the visual quality of the images. The proposed benchmark aims to guide future developments and accepts submissions of new methods, with the latest results available online: https://videoprocessing.ai/benchmarks/iqa-defenses.html.
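The dual requirement that a defense preserve both IQA scores and image quality can be sketched as a simple two-number evaluation. The function below is a hypothetical illustration (the names `evaluate_defense`, the mean-absolute score shift, and PSNR as the quality proxy are assumptions for this sketch, not the benchmark's actual protocol):

```python
import numpy as np

def evaluate_defense(clean_scores, defended_scores, clean_imgs, defended_imgs):
    """Hypothetical dual-objective check for an IQA defense.

    Returns (score_shift, mean_psnr):
      - score_shift: mean absolute deviation of the metric's scores after
        the defense is applied (lower is better, measures score fidelity)
      - mean_psnr: mean PSNR between original and defended images for
        pixel values in [0, 1] (higher is better, measures quality
        preservation)
    """
    score_shift = float(np.mean(np.abs(defended_scores - clean_scores)))
    psnrs = []
    for a, b in zip(clean_imgs, defended_imgs):
        mse = np.mean((a - b) ** 2)
        psnrs.append(10 * np.log10(1.0 / mse) if mse > 0 else np.inf)
    return score_shift, float(np.mean(psnrs))
```

A good defense keeps `score_shift` near zero while keeping `mean_psnr` high; defenses that purify images aggressively tend to win on one axis and lose on the other.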