A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments

📅 2025-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Model extraction attacks (MEAs) threaten model intellectual property and data privacy in distributed environments, including cloud, edge, and federated learning, yet existing research lacks a unified cross-paradigm analytical framework, resulting in fragmented defenses and inaccurate risk assessment. Method: We propose the first comprehensive MEA analysis framework covering cloud-edge-end collaborative scenarios, integrating a systematic literature review, threat modeling, attack-defense game analysis, and empirical validation across the autonomous driving, healthcare, and finance domains. Contribution/Results: We identify 12 attack patterns and 9 defense mechanisms and establish a precise mapping between them. We introduce the novel "context-aware defense" principle, revealing how environmental characteristics fundamentally shape the evolution of the attack surface and the corresponding defense requirements. Critically, we expose a pervasive flaw in current evaluations: the neglect of realistic distributed constraints. Our work provides both theoretical foundations and practical guidelines for building adaptive, quantifiable security assessment and protection frameworks for distributed machine learning.

📝 Abstract
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data. With the increasing deployment of machine learning models in distributed computing environments, including cloud, edge, and federated learning settings, each paradigm introduces distinct vulnerabilities and challenges. Without a unified perspective on MEAs across these distributed environments, organizations risk fragmented defenses, inadequate risk assessments, and substantial economic and privacy losses. This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements. We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services. By synthesizing recent advances in MEA research and discussing the limitations of current evaluation practices, this survey provides essential insights for developing robust and adaptive defense strategies. Our comprehensive approach highlights the importance of integrating protective measures across the entire distributed computing landscape to ensure the secure deployment of machine learning models.
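The core threat the abstract describes, an adversary querying a deployed model and training a substitute from its answers, can be sketched in a few lines. Everything here is illustrative and not from the paper: the "victim" is a toy linear classifier, the query budget is arbitrary, and the surrogate is fit with simple perceptron updates.

```python
# Hypothetical sketch of a query-based model extraction attack: the
# adversary sees only hard labels from a black-box victim and trains
# a surrogate that mimics its decision boundary.
import random

random.seed(0)

# Victim: a secret linear classifier the attacker cannot inspect.
SECRET_W, SECRET_B = [2.0, -1.0], 0.5

def victim_predict(x):
    """Black-box API: returns only a hard label, like a deployed service."""
    return 1 if SECRET_W[0] * x[0] + SECRET_W[1] * x[1] > SECRET_B else 0

# Attacker step 1: issue random queries and record the returned labels.
queries = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(500)]
labels = [victim_predict(x) for x in queries]

# Attacker step 2: fit a surrogate on the stolen labels (perceptron updates).
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] > b else 0
        if pred != y:                      # misclassified: nudge the boundary
            step = 0.1 if y == 1 else -0.1
            w[0] += step * x[0]
            w[1] += step * x[1]
            b -= step

# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
test = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(1000)]
agree = sum(victim_predict(x) == (1 if w[0] * x[0] + w[1] * x[1] > b else 0)
            for x in test)
fidelity = agree / len(test)
print(f"surrogate fidelity: {fidelity:.2f}")
```

The attacker never touches the victim's weights; agreement ("fidelity") on fresh inputs is the standard measure of how much of the model was effectively stolen, which is why defenses focus on limiting what each query reveals.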
Problem

Research questions and friction points this paper is trying to address.

Understand MEA vulnerabilities specific to distributed machine learning environments.
Examine MEAs across cloud, edge, and federated learning settings.
Develop robust defenses against model intellectual property theft.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Examines the evolution of attacks in distributed environments
Highlights the importance of integrated protective measures
Provides insights for adaptive defense strategies
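One defense family commonly covered in this literature is output perturbation: truncating or coarsening the scores a served model returns so that extraction queries yield less signal. The sketch below is a generic illustration under assumed names (`model_confidences`, `serve_prediction` are not from the paper), with a toy three-class model standing in for a real one.

```python
# Minimal sketch of output-truncation defenses against extraction:
# the less information each response carries, the more queries an
# attacker needs to clone the model.
import math

def model_confidences(x):
    """Stand-in for a real model: a toy 3-class probability vector."""
    raw = [math.exp(-(x - c) ** 2) for c in (0.0, 1.0, 2.0)]
    total = sum(raw)
    return [r / total for r in raw]

def serve_prediction(x, mode="full"):
    probs = model_confidences(x)
    if mode == "full":          # full scores: leaks the most signal
        return probs
    if mode == "rounded":       # quantize scores to one decimal place
        return [round(p, 1) for p in probs]
    if mode == "label_only":    # hard label only: strongest truncation
        return probs.index(max(probs))
    raise ValueError(f"unknown mode: {mode}")

full = serve_prediction(0.42, "full")
rounded = serve_prediction(0.42, "rounded")
label = serve_prediction(0.42, "label_only")
print(full, rounded, label)
```

The trade-off the survey's "context-aware defense" framing points at is visible even here: label-only serving hurts benign clients that need calibrated confidences, so the right truncation level depends on the deployment context.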