Enabling Trustworthy Federated Learning via Remote Attestation for Mitigating Byzantine Threats

📅 2025-08-30
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Federated learning (FL) is vulnerable to Byzantine attacks because the central server cannot verify the integrity of clients' local training processes; moreover, existing data-driven defenses struggle to distinguish malicious model updates from benign discrepancies caused by non-IID data distributions, resulting in high false-positive rates. To address this, we propose Sentinel, the first FL security framework that tightly integrates remote attestation with trusted execution environments (TEEs). Sentinel achieves verifiable training via fine-grained code instrumentation and control-flow monitoring, enforces runtime integrity checks on critical variables within the TEE, and generates cryptographically signed remote attestation reports. These mechanisms jointly ensure the authenticity and integrity of model updates. Evaluated on resource-constrained IoT devices, Sentinel incurs minimal overhead while substantially reducing false positives and significantly enhancing the reliability and security of global model aggregation.

📝 Abstract
Federated Learning (FL) has gained significant attention for its privacy-preserving capabilities, enabling distributed devices to collaboratively train a global model without sharing raw data. However, its distributed nature forces the central server to blindly trust the local training process and aggregate uncertain model updates, making it susceptible to Byzantine attacks from malicious participants, especially in mission-critical scenarios. Detecting such attacks is challenging due to the diverse knowledge across clients, where variations in model updates may stem from benign factors, such as non-IID data, rather than adversarial behavior. Existing data-driven defenses struggle to distinguish malicious updates from natural variations, leading to high false-positive rates and poor filtering performance. To address this challenge, we propose Sentinel, a remote attestation (RA)-based scheme for FL systems that regains client-side transparency and mitigates Byzantine attacks from a system-security perspective. Our system employs code instrumentation to track control flow and monitor critical variables in the local training process. Additionally, we utilize a trusted training recorder within a Trusted Execution Environment (TEE) to generate an attestation report, which is cryptographically signed and securely transmitted to the server. Upon verification, the server ensures that legitimate client training processes are free from program-behavior violations or data manipulation, allowing only trusted model updates to be aggregated into the global model. Experimental results on IoT devices demonstrate that Sentinel ensures the trustworthiness of local training integrity with low runtime and memory overhead.
Problem

Research questions and friction points this paper is trying to address.

Mitigating Byzantine attacks in Federated Learning systems
Ensuring trustworthiness of local training processes
Distinguishing malicious updates from benign data variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Remote attestation ensures client training integrity
Code instrumentation monitors control-flow and variables
TEE generates cryptographically signed attestation reports
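The attest-then-aggregate flow above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all names are hypothetical, and an HMAC over a shared secret stands in for the TEE's real asymmetric attestation key (e.g. an SGX- or TrustZone-backed signing key). The server accepts a model update only if the report signature verifies and the measured training code matches the expected program.

```python
import hashlib
import hmac

# Illustrative stand-ins; a real TEE signs reports with a vendor-rooted
# asymmetric key, not a pre-shared secret.
ATTESTATION_KEY = b"demo-shared-secret"
EXPECTED_CODE_HASH = hashlib.sha256(b"trusted-training-code").hexdigest()

def tee_sign_report(code_hash: str, update_digest: str) -> str:
    """TEE-side trusted recorder: bind the measured training code
    to the digest of the produced model update."""
    msg = f"{code_hash}|{update_digest}".encode()
    return hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).hexdigest()

def make_report(code_blob: bytes, update_blob: bytes) -> dict:
    """Client side: measure the training code, hash the update,
    and have the TEE sign the pair."""
    code_hash = hashlib.sha256(code_blob).hexdigest()
    update_digest = hashlib.sha256(update_blob).hexdigest()
    return {
        "code_hash": code_hash,
        "update_digest": update_digest,
        "update": update_blob,
        "signature": tee_sign_report(code_hash, update_digest),
    }

def server_verify(report: dict) -> bool:
    """Server side: accept the update only if the report is authentic
    AND the measured code equals the expected training program."""
    msg = f"{report['code_hash']}|{report['update_digest']}".encode()
    expected = hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, report["signature"])
            and report["code_hash"] == EXPECTED_CODE_HASH)

# An honest client runs the expected code; a Byzantine client runs
# tampered code, so its (otherwise well-formed) report is rejected.
honest = make_report(b"trusted-training-code", b"update-A")
byzantine = make_report(b"tampered-training-code", b"update-B")

trusted_updates = [r["update"] for r in (honest, byzantine) if server_verify(r)]
print(trusted_updates)  # only update-A survives aggregation
```

Note the design point this captures: filtering is based on a system-level integrity check of how the update was produced, not on statistical properties of the update itself, which is why benign non-IID variation does not trigger false positives.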
👥 Authors
Chaoyu Zhang — Virginia Tech, VA, USA
Heng Jin — Virginia Tech, VA, USA
Shanghao Shi — Virginia Tech (Network Security, Machine Learning Security, CPS and IoT Security)
Hexuan Yu — Virginia Tech, VA, USA
Sydney Johns — Virginia Tech, VA, USA
Y. Thomas Hou — Virginia Tech, VA, USA
Wenjing Lou — W. C. English Endowed Professor, IEEE Fellow, Virginia Tech, USA (Cyber Security, Wireless Networks, Wireless Security, Network Security, Cloud Computing)