🤖 AI Summary
Federated learning (FL) avoids raw data sharing yet remains vulnerable to diverse privacy threats; however, existing research lacks a systematic, fine-grained taxonomy of threats across horizontal, vertical, and transfer FL paradigms. Method: We conduct a systematic literature review integrating threat modeling, attack provenance analysis, and defense mechanism mapping to identify both commonalities and paradigm-specific risks. Contribution/Results: We propose the first unified privacy threat classification framework covering all three FL paradigms, explicitly characterizing differences in attack surfaces and principles for defense adaptation. The study yields a structured “threat–countermeasure” knowledge graph, enabling rigorous privacy risk assessment, robust algorithm design, and principled selection of defense strategies in FL systems. This framework provides both theoretical foundations and practical guidance for enhancing FL privacy security.
📝 Abstract
Federated learning is widely considered a privacy-aware learning method because no raw training data is exchanged directly between clients. Nevertheless, federated learning remains exposed to privacy threats, and countermeasures against them have been studied. However, the privacy threats that are common to, or unique to, the typical types of federated learning have not yet been categorized and described in a comprehensive and specific way. In this paper, we describe privacy threats and countermeasures for the three typical types of federated learning: horizontal federated learning, vertical federated learning, and transfer federated learning.
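To make concrete why federated learning is seen as privacy-aware, the following minimal sketch shows a FedAvg-style round in the horizontal setting: clients hold disjoint samples of the same feature space, run local updates on their private data, and share only model parameters with the server. The clients, data, and one-parameter model here are hypothetical illustrations, not the paper's experimental setup.

```python
# Hedged sketch: horizontal federated averaging on a toy 1-D model y ≈ w*x.
# Only the scalar weight w crosses the client/server boundary; the raw
# (x, y) pairs never leave each client.

def local_update(w, data, lr=0.01):
    """One gradient step of least squares on this client's local data only."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    """Server aggregates client parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two hypothetical clients with disjoint samples of the same relation y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],               # client A's private data
    [(3.0, 6.0), (4.0, 8.0), (5.0, 10.0)],  # client B's private data
]

w_global = 0.0
for _ in range(50):                         # 50 communication rounds
    local = [local_update(w_global, d) for d in clients]
    w_global = federated_average(local, [len(d) for d in clients])
# w_global converges toward the true slope 2.0
```

Even in this benign sketch, the shared weights are a function of the private data, which is exactly the attack surface (e.g., gradient inversion and membership inference) that the threats surveyed in this paper exploit.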