Foundation Models in Federated Learning: Assessing Backdoor Vulnerabilities

📅 2024-01-18
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work identifies a novel backdoor attack paradigm arising from integrating foundation models (FMs) into federated learning (FL): an adversary who never participates in FL training can poison the synthetic data generated by an FM, inducing a one-time, externalized corruption of the global model. This threat violates the conventional FL security assumption that attacks require adversarial client participation, and the paper offers the first systematic formalization of security risks in coupled FM-FL systems. Method: the attack is evaluated on both image classification (CIFAR-10, FEMNIST) and text classification (AG News), and tested against state-of-the-art FL defenses. Contribution/Results: the attack achieves an over 92% success rate across all benchmarks, while existing FL defense mechanisms fail in more than 85% of cases on average. The findings underscore the urgent need to redesign FL security architectures for the FM era, and establish both risk awareness and an evaluation benchmark for trustworthy federated learning.
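To make the threat model concrete, below is a minimal sketch of how such data poisoning could look: the attacker stamps a pixel-pattern trigger onto a fraction of FM-generated images and relabels them to a target class, entirely before the data reaches the FL system. Every name here (make_trigger, poison_synthetic_dataset, the 3x3 white-patch trigger, the 10% poison rate) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

TARGET_LABEL = 0   # class the backdoor should force (hypothetical choice)
POISON_RATE = 0.1  # fraction of synthetic samples to poison (hypothetical)

def make_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white patch (a classic pixel-pattern trigger)
    into the bottom-right corner of an HxWxC image scaled to [0, 1]."""
    patched = image.copy()
    patched[-3:, -3:, :] = 1.0
    return patched

def poison_synthetic_dataset(images: np.ndarray, labels: np.ndarray,
                             rng: np.random.Generator | None = None):
    """Poison a fraction of FM-generated samples: add the trigger and
    relabel them to the attacker's target class. Returns new arrays,
    leaving the originals untouched."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(POISON_RATE * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = make_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels
```

Because the poisoning happens upstream of FL, the attacker needs no client credentials and leaves no trace in the update traffic that most FL defenses inspect.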

📝 Abstract
Federated Learning (FL), a privacy-preserving machine learning framework, faces significant data-related challenges. For example, the lack of suitable public datasets leads to ineffective information exchange, especially in heterogeneous environments with uneven data distribution. Foundation Models (FMs) offer a promising solution by generating synthetic datasets that mimic client data distributions, aiding model initialization and knowledge sharing among clients. However, the interaction between FMs and FL introduces new attack vectors that remain largely unexplored. This work therefore assesses the backdoor vulnerabilities introduced by FMs, where attackers exploit safety issues in FMs and poison synthetic datasets to compromise the entire system. Unlike traditional attacks, these new threats are characterized by their one-time, external nature, requiring minimal involvement in FL training. Given these unique characteristics, current FL defense strategies provide limited robustness against this novel attack approach. Extensive experiments across image and text domains reveal the high susceptibility of FL to these novel threats, emphasizing the urgent need for enhanced security measures in FL in the era of FMs.
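For context on where the poisoned data enters, the sketch below assumes a common FM-FL pipeline in which the server warm-starts the global model on the FM-generated synthetic set once, before any federated round; every subsequent client update is benign, which is what makes the attack one-time and external. The functions (server_warm_start, fedavg) and hyperparameters are illustrative assumptions, not the paper's code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def server_warm_start(model: nn.Module, synthetic_ds: TensorDataset,
                      epochs: int = 3, lr: float = 1e-3) -> None:
    """Server-side pretraining of the global model on the (possibly
    poisoned) FM-generated synthetic dataset. This is the single point
    where the external attacker's influence enters the system."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in DataLoader(synthetic_ds, batch_size=64, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def fedavg(global_model: nn.Module, client_states: list[dict]) -> None:
    """Standard FedAvg aggregation over benign client state dicts.
    Training here is entirely clean, yet the backdoor planted during
    warm-start can persist through these rounds."""
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
```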
Problem

Research questions and friction points this paper is trying to address.

Assessing backdoor vulnerabilities in Federated Learning using Foundation Models
Exploring new attack vectors from FM-FL interaction compromising system security
Evaluating the limited robustness of current FL defenses against FM-based attacks (a minimal evaluation sketch follows below)
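The sketch below shows one way such a robustness evaluation could be scored: attack success rate (ASR) on trigger-stamped inputs, plus a representative update-clipping defense. Both functions are hypothetical illustrations (trigger_fn is any callable stamping a trigger, analogous to make_trigger above); the structural point, consistent with the paper's finding that defenses fail in most cases, is that update-level defenses have little purchase when the backdoor arrives through server-side synthetic data rather than client updates.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, images, labels, target_label, trigger_fn):
    """ASR: fraction of non-target-class samples that the model assigns
    to the target class once the trigger is stamped on them."""
    model.eval()
    keep = labels != target_label
    triggered = torch.stack([trigger_fn(x) for x in images[keep]])
    preds = model(triggered).argmax(dim=1)
    return (preds == target_label).float().mean().item()

def clip_update(update: dict, max_norm: float = 1.0) -> dict:
    """Norm-clipping of a client's model delta, a standard FL defense.
    It never inspects the server's synthetic warm-start data, which is
    where this attack's poisoned knowledge actually enters."""
    flat = torch.cat([v.flatten() for v in update.values()])
    scale = min(1.0, max_norm / (float(flat.norm()) + 1e-12))
    return {k: v * scale for k, v in update.items()}
```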
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Foundation Models to generate synthetic datasets
Assessing backdoor vulnerabilities in Federated Learning
Exploring one-time external attacks on FL systems
Xi Li
The Pennsylvania State University
Chen Wu
The Pennsylvania State University
Jiaqi Wang
The Pennsylvania State University