Multi-Task Models Adversarial Attacks

📅 2023-05-20
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study systematically investigates the adversarial vulnerability of multi-task learning (MTL) models in visual understanding, focusing on three questions: (1) the cross-task transferability of single-task attacks, (2) the feasibility of coordinated attacks targeting all tasks at once, and (3) the dual impact of parameter sharing on robustness. To address these, the authors propose the Gradient Balancing Multi-Task Attack (GB-MTA), a framework that formulates attacking a multi-task model as an optimization problem over the average relative loss change across tasks and approximates it as an integer linear program (ILP). Evaluated on the NYUv2 and Tiny-Taxonomy benchmarks, GB-MTA substantially improves attack success rates against both standard and adversarially trained MTL models. The empirical analysis further shows that while parameter sharing improves task accuracy, it also amplifies cross-task attack transferability, yielding both an analytical foundation and practical tools for the security assessment and robust design of MTL systems.
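The paper's exact attack procedure is not reproduced here, but the gradient-balancing idea behind GB-MTA can be sketched in a few lines. In the toy example below (the two quadratic "task losses", all function names, and the step size are invented for illustration), each task's gradient is weighted by the inverse of its current loss before taking an FGSM-style signed step, so the update targets the average *relative* loss change rather than any single task's absolute loss:

```python
import numpy as np

def task_losses(x):
    # Two toy losses sharing the same input x, standing in for the
    # heads of a multi-task model (not the paper's actual tasks).
    l1 = float(np.sum((x - 1.0) ** 2))        # "task 1" loss
    l2 = float(np.sum((2.0 * x + 3.0) ** 2))  # "task 2" loss
    return l1, l2

def task_grads(x):
    # Analytic gradients of the two toy losses w.r.t. the input.
    g1 = 2.0 * (x - 1.0)
    g2 = 4.0 * (2.0 * x + 3.0)
    return g1, g2

def gb_attack_step(x, eps=0.1):
    # Balance each task's gradient by its current loss magnitude, so
    # the combined direction maximizes a linear estimate of the
    # average relative loss change, then take a signed (FGSM-style)
    # step of size eps.
    l1, l2 = task_losses(x)
    g1, g2 = task_grads(x)
    combined = g1 / max(l1, 1e-12) + g2 / max(l2, 1e-12)
    return x + eps * np.sign(combined)
```

On this toy problem a single step produces a positive average relative loss change even though one task's individual loss may still decrease, which is exactly the coordination difficulty that motivates balancing across tasks instead of attacking each task in isolation.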
📝 Abstract
Multi-Task Learning (MTL) involves developing a singular model, known as a multi-task model, to concurrently perform multiple tasks. While the security of single-task models has been thoroughly studied, multi-task models pose several critical security questions, such as 1) their vulnerability to single-task adversarial attacks, 2) the possibility of designing attacks that target multiple tasks, and 3) the impact of task sharing and adversarial training on their resilience to such attacks. This paper addresses these queries through detailed analysis and rigorous experimentation. First, we explore the adaptation of single-task white-box attacks to multi-task models and identify their limitations. We then introduce a novel attack framework, the Gradient Balancing Multi-Task Attack (GB-MTA), which treats attacking a multi-task model as an optimization problem. This problem, based on averaged relative loss change across tasks, is approximated as an integer linear programming problem. Extensive evaluations on MTL benchmarks, NYUv2 and Tiny-Taxonomy, demonstrate GB-MTA's effectiveness against both standard and adversarially trained multi-task models. The results also highlight a trade-off between task accuracy improvement via parameter sharing and increased model vulnerability due to enhanced attack transferability.
Problem

Research questions and friction points this paper is trying to address.

Assessing multi-task model robustness to single-task adversarial attacks
Designing attacks to simultaneously target all tasks in multi-task models
Investigating how parameter sharing affects adversarial robustness in multi-task learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Balancing Multi-Task Attack (GB-MTA) framework for multi-task models
Attack on all tasks formulated as an optimization problem and approximated via integer linear programming
Analysis of the trade-off between accuracy gains from parameter sharing and increased attack transferability
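The summary does not spell out the paper's ILP formulation, but one way such a program can arise is worth illustrating: if the attack direction is restricted to a sign vector s in {-1,+1}^d and the objective is the linearized average relative loss change, the integer program decomposes per coordinate and its optimum is just the sign of the loss-balanced gradient. The sketch below (a hypothetical construction, not the paper's formulation) verifies this by brute force on a tiny instance:

```python
import numpy as np
from itertools import product

def balanced_gradient(G, losses):
    # Weight each task's gradient (rows of G) by the inverse of its
    # current loss, so the linearized objective measures the average
    # *relative* loss change across tasks.
    return np.sum(G / np.asarray(losses)[:, None], axis=0)

def best_signs_bruteforce(G, losses):
    # Enumerate every sign vector s in {-1,+1}^d and keep the one
    # maximizing the linearized objective  w . s  (feasible only for
    # tiny d; shown here purely to check the closed form).
    w = balanced_gradient(G, losses)
    best_s, best_val = None, -np.inf
    for s in product((-1.0, 1.0), repeat=G.shape[1]):
        val = float(np.dot(w, s))
        if val > best_val:
            best_s, best_val = np.asarray(s), val
    return best_s
```

Because the objective is a sum of independent per-coordinate terms, the exhaustive search always recovers `np.sign(balanced_gradient(G, losses))`; richer couplings between tasks or coordinates are what would make a genuine ILP solver necessary.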