MMARD: Improving the Min-Max Optimization Process in Adversarial Robustness Distillation

📅 2025-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial robustness distillation (ARD) methods suffer from two critical limitations: (1) inner-loop adversarial examples are generated far from the teacher’s decision boundary, leading to loss of essential robustness information; and (2) outer-loop natural and robust learning are decoupled, causing robustness saturation and strong dependence on specific teachers. To address these issues, we propose a Min-Max distillation framework featuring synergistic inner-outer optimization. In the inner loop, a teacher-guided boundary alignment mechanism—leveraging the teacher’s robust predictions—drives adversarial examples toward the decision boundary. In the outer loop, we introduce a natural–robust–teacher tripartite mutual information structure to unify knowledge modeling across multiple scenarios. This is the first ARD method to jointly optimize both loops. Evaluated on CIFAR-10/100 and other benchmarks, it achieves state-of-the-art robust accuracy, exhibits plug-and-play compatibility, significantly alleviates robustness saturation, and reduces teacher dependency.

📝 Abstract
Adversarial Robustness Distillation (ARD) is a promising task for boosting the robustness of small-capacity models under the guidance of a pre-trained robust teacher. ARD can be summarized as a min-max optimization process: synthesizing adversarial examples (inner) and training the student (outer). Although they achieve competitive robustness performance, existing ARD methods still have issues. In the inner process, the synthetic training examples lie far from the teacher's decision boundary, causing important robustness information to be missed. In the outer process, the student's learning of the natural and robust scenarios is decoupled, leading to robustness saturation, i.e., student performance becomes highly dependent on the choice of teacher. To tackle these issues, this paper proposes a general Min-Max optimization Adversarial Robustness Distillation (MMARD) method. For the inner process, we introduce the teacher's robust predictions, which drive the training examples closer to the teacher's decision boundary to explore more robust knowledge. For the outer process, we propose a structured information modeling method based on triangular relationships to measure the model's mutual information across natural and robust scenarios and enhance its ability to understand multi-scenario mapping relationships. Experiments show our MMARD achieves state-of-the-art performance on multiple benchmarks. Besides, MMARD is plug-and-play and convenient to combine with existing methods.
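The min-max process described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear softmax models, the PGD-style sign-gradient inner loop, the step sizes, and the fixed α weighting of the natural and robust distillation terms are all illustrative assumptions; the key idea shown is that the inner maximization targets the teacher's soft prediction rather than a hard label.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, q):
    # CE between a target distribution p and a prediction q
    return -np.sum(p * np.log(q + 1e-12))

rng = np.random.default_rng(0)
dim, n_cls = 8, 3
# Toy linear softmax classifiers standing in for teacher and student
W_teacher = rng.normal(size=(n_cls, dim))
W_student = rng.normal(size=(n_cls, dim))

x_nat = rng.normal(size=dim)
eps, step, n_steps = 0.1, 0.02, 10

# Teacher's robust prediction on the clean input guides the inner loop
p_teacher = softmax(W_teacher @ x_nat)

# Inner (max) loop: ascend the student's cross-entropy against the
# teacher's soft prediction, pushing x_adv toward the decision boundary,
# while projecting back into the L-infinity ball around x_nat.
x_adv = x_nat.copy()
for _ in range(n_steps):
    p_student = softmax(W_student @ x_adv)
    # Analytic gradient of CE(p_teacher, softmax(W x)) w.r.t. x
    grad = W_student.T @ (p_student - p_teacher)
    x_adv = x_adv + step * np.sign(grad)               # ascent step
    x_adv = x_nat + np.clip(x_adv - x_nat, -eps, eps)  # projection

# Outer (min) loss: weighted natural + robust distillation terms,
# both anchored to the same teacher prediction
alpha = 0.5
loss_nat = cross_entropy(p_teacher, softmax(W_student @ x_nat))
loss_rob = cross_entropy(p_teacher, softmax(W_student @ x_adv))
outer_loss = alpha * loss_nat + (1 - alpha) * loss_rob
```

In a real training loop, `outer_loss` would be minimized over the student's parameters by gradient descent; the sketch only shows one inner-outer evaluation for a single example.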
Problem

Research questions and friction points this paper is trying to address.

Improves adversarial robustness distillation via min-max optimization.
Addresses issues in synthetic training examples and decision boundaries.
Enhances model understanding of multi-scenario mapping relationships.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses teacher's robust predictions for inner optimization
Introduces structured information modeling for outer optimization
Enhances multi-scenario mapping understanding in models
Yuzheng Wang
Fudan University
Knowledge Distillation · Vision-Language Model · AIGC
Zhaoyu Chen
TikTok
AI Security · Trustworthy AI · Multimodal AI · Generative AI
Dingkang Yang
ByteDance
Multimodal Learning · Generative AI · Embodied AI
Yuanhang Wang
Shanghai Engineering Research Center of AI & Robotics, Academy for Engineering & Technology, Fudan University
Lizhe Qi
Engineering Research Center of AI & Robotics, Ministry of Education, Academy for Engineering & Technology, Fudan University