Adversarial Robustness in One-Stage Learning-to-Defer

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Learning-to-Defer (L2D) in hybrid decision systems is vulnerable to adversarial perturbations, which can simultaneously corrupt both model predictions and deferral decisions; existing robustness approaches are restricted to two-stage frameworks and lack formal guarantees. Method: We propose the first end-to-end, single-stage adversarially robust L2D framework. It jointly optimizes the predictor and the deferrer, formalizes an adversarial attack model targeting both components, and introduces a cost-sensitive adversarial surrogate loss. Theoretical analysis establishes $\mathcal{H}$-consistency, $(\mathcal{R}, \mathcal{F})$-consistency, and Bayes consistency. Contribution/Results: Extensive experiments demonstrate significant improvements in robustness against both untargeted and targeted adversarial attacks on classification and regression tasks, while preserving clean predictive performance.
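The joint attack surface described above can be illustrated with a toy sketch: a single linear model emits K class scores plus one deferral score, is trained with a cross-entropy surrogate that rewards deferral only when the expert is correct (a common L2D surrogate used here as a stand-in, not the paper's exact cost-sensitive loss), and is attacked with one FGSM-style sign-gradient step on the joint objective. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3      # number of prediction classes (illustrative)
D = 4      # input dimension (illustrative)
EPS = 0.1  # L_inf adversarial budget

# Hypothetical one-stage model: one linear layer with K + 1 scores;
# indices 0..K-1 are class predictions, index K means "defer to the expert".
W = rng.normal(scale=0.1, size=(K + 1, D))
b = np.zeros(K + 1)

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def joint_loss_and_grad(x, y, expert_correct):
    """Cross-entropy surrogate over the K + 1 options: the deferral score
    is rewarded only when the expert would be correct (a stand-in for the
    paper's cost-sensitive surrogate, not its exact form)."""
    z = W @ x + b
    p = softmax(z)
    alpha = 1.0 if expert_correct else 0.0
    loss = -np.log(p[y]) - alpha * np.log(p[K])
    # Gradient of the two softmax cross-entropy terms w.r.t. z,
    # then chained through the linear layer to get d loss / d x.
    grad_z = (1.0 + alpha) * p
    grad_z[y] -= 1.0
    grad_z[K] -= alpha
    return loss, W.T @ grad_z

x = rng.normal(size=D)
y = 1
clean_loss, g = joint_loss_and_grad(x, y, expert_correct=True)

# FGSM-style attack on the *joint* objective: a single sign-gradient step
# perturbs the prediction and the deferral decision at the same time.
x_adv = x + EPS * np.sign(g)
adv_loss, _ = joint_loss_and_grad(x_adv, y, expert_correct=True)

print(f"clean loss {clean_loss:.3f}  adversarial loss {adv_loss:.3f}")
```

Because the surrogate is convex in the input for a linear model, this single step can only increase the joint loss, which is exactly the failure mode, corrupted predictions and manipulated deferrals at once, that a robust one-stage formulation must defend against.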

📝 Abstract
Learning-to-Defer (L2D) enables hybrid decision-making by routing inputs either to a predictor or to external experts. While promising, L2D is highly vulnerable to adversarial perturbations, which can not only flip predictions but also manipulate deferral decisions. Prior robustness analyses focus solely on two-stage settings, leaving open the end-to-end (one-stage) case where predictor and allocation are trained jointly. We introduce the first framework for adversarial robustness in one-stage L2D, covering both classification and regression. Our approach formalizes attacks, proposes cost-sensitive adversarial surrogate losses, and establishes theoretical guarantees including $\mathcal{H}$, $(\mathcal{R}, \mathcal{F})$, and Bayes consistency. Experiments on benchmark datasets confirm that our methods improve robustness against untargeted and targeted attacks while preserving clean performance.
Problem

Research questions and friction points this paper is trying to address.

Analyzes adversarial vulnerability in joint predictor-deferral training systems
Develops robustness framework for one-stage learning-to-defer classification and regression
Proposes adversarial defenses with theoretical guarantees against manipulation attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial robustness framework for one-stage L2D
Cost-sensitive adversarial surrogate losses proposed
Theoretical guarantees for classification and regression established
Yannis Montreuil
PhD Candidate
Machine Learning · Statistical Learning · Human-AI Collaboration

Letian Yu
School of Computing, National University of Singapore, Singapore, 118431, Singapore

Axel Carlier
ISAE-SUPAERO
AI · Multimedia

Lai Xing Ng
Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore, 138632, Singapore

Wei Tsang Ooi
National University of Singapore
Multimedia Systems · Interactive Systems · Intelligent Systems