HealSplit: Towards Self-Healing through Adversarial Distillation in Split Federated Learning

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Split Federated Learning (SFL) is vulnerable to diverse data poisoning attacks targeting local features, labels, and model weights; existing defenses, adapted from conventional federated learning, lack robustness because they cannot access complete model updates. To address this, we propose the first end-to-end self-healing defense framework for SFL. Our method introduces a novel topology-aware anomaly detection mechanism that quantifies local anomalies on the client interaction graph; integrates adversarial multi-teacher knowledge distillation with a generative recovery pipeline to achieve semantically consistent sample repair; and employs a gradient-topology interaction matrix alignment strategy to strengthen consistency verification. Extensive experiments on four benchmark datasets demonstrate that our approach significantly outperforms ten state-of-the-art defense methods in robustness, effectively mitigating a broad spectrum of data poisoning attacks under diverse SFL settings.

📝 Abstract
Split Federated Learning (SFL) is an emerging paradigm for privacy-preserving distributed learning. However, it remains vulnerable to sophisticated data poisoning attacks targeting local features, labels, smashed data, and model weights. Existing defenses, primarily adapted from traditional Federated Learning (FL), are less effective under SFL due to limited access to complete model updates. This paper presents HealSplit, the first unified defense framework tailored for SFL, offering end-to-end detection and recovery against five sophisticated types of poisoning attacks. HealSplit comprises three key components: (1) a topology-aware detection module that constructs graphs over smashed data to identify poisoned samples via topological anomaly scoring (TAS); (2) a generative recovery pipeline that synthesizes semantically consistent substitutes for detected anomalies, validated by a consistency validation student; and (3) an adversarial multi-teacher distillation framework that trains the student using semantic supervision from a Vanilla Teacher and anomaly-aware signals from an Anomaly-Influence Debiasing (AD) Teacher, guided by the alignment between topological and gradient-based interaction matrices. Extensive experiments on four benchmark datasets demonstrate that HealSplit consistently outperforms ten state-of-the-art defenses, achieving superior robustness and defense effectiveness across diverse attack scenarios.
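To make the detection component (1) concrete: scoring topological anomalies over smashed-data embeddings can be illustrated with a simplified local-outlier-style score on a k-nearest-neighbour graph. This is a minimal sketch of the general idea only; the paper's actual TAS is defined on a client interaction graph and differs in detail, and the function name and scoring formula here are illustrative assumptions.

```python
import numpy as np

def topological_anomaly_scores(smashed: np.ndarray, k: int = 5) -> np.ndarray:
    """Illustrative anomaly score: how far each sample sits from its k
    nearest neighbours, relative to how tightly those neighbours cluster.
    Samples whose neighbourhood radius greatly exceeds their neighbours'
    radii receive scores >> 1 and are flagged as likely poisoned.
    """
    # Pairwise Euclidean distances between smashed-data vectors.
    d = np.linalg.norm(smashed[:, None, :] - smashed[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-distance

    knn = np.argsort(d, axis=1)[:, :k]            # indices of k nearest neighbours
    knn_dist = np.take_along_axis(d, knn, axis=1)  # distances to those neighbours
    mean_knn = knn_dist.mean(axis=1)               # neighbourhood radius per sample

    # Ratio of a sample's radius to the average radius of its neighbours.
    return mean_knn / (mean_knn[knn].mean(axis=1) + 1e-12)
```

A sample drawn far from the cluster of clean embeddings gets a score well above 1, while in-distribution samples score near 1, so a simple threshold separates candidates for the recovery pipeline.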
Problem

Research questions and friction points this paper is trying to address.

Defending split federated learning against sophisticated data poisoning attacks
Detecting poisoned samples through topological anomaly scoring on smashed data
Recovering from attacks via generative synthesis and adversarial teacher distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial distillation framework for self-healing
Topology-aware detection via graph anomaly scoring
Generative recovery with semantic consistency validation
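The multi-teacher distillation idea above can be sketched as a temperature-scaled KD loss in which per-sample KL terms against the Vanilla Teacher are reweighted by anomaly-aware signals (standing in for the AD Teacher's influence debiasing). This is a hedged sketch under assumed interfaces, not the paper's actual objective; `multi_teacher_kd_loss`, `ad_weights`, and the specific weighting scheme are illustrative.

```python
import numpy as np

def softmax(z: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Numerically stable temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multi_teacher_kd_loss(student_logits: np.ndarray,
                          vanilla_logits: np.ndarray,
                          ad_weights: np.ndarray,
                          temperature: float = 2.0) -> float:
    """Per-sample KL(teacher || student) on softened distributions,
    weighted by ad_weights (higher weight = the AD teacher considers
    the sample cleaner), then averaged; the T^2 factor is the standard
    KD gradient-scale correction."""
    p = softmax(vanilla_logits, temperature)                 # teacher soft targets
    log_q = np.log(softmax(student_logits, temperature) + 1e-12)
    per_sample_kl = (p * (np.log(p + 1e-12) - log_q)).sum(axis=1)
    return float((ad_weights * per_sample_kl).mean() * temperature ** 2)
```

Down-weighting samples the AD Teacher flags as anomaly-influenced keeps poisoned examples from dominating the student's distillation signal, which matches the debiasing role described in the abstract.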