AI Summary
In decentralized federated learning (DFL), elected aggregators may turn malicious after election, yet existing mechanisms lack dynamic, trustworthy auditing. Method: This paper proposes the first trusted aggregator auditing framework for DFL, built on a two-phase trust mechanism: (i) pre-election scoring over multi-dimensional node attributes to assess initial trustworthiness, and (ii) post-aggregation real-time anomaly detection using the Hilbert–Schmidt Independence Criterion (HSIC), integrated with blockchain-based evidence anchoring and concept drift analysis so that audit outcomes are verifiable and traceable. Contribution/Results: Extensive experiments across multiple datasets and varying Byzantine attacker scales show that the framework reduces the accuracy degradation caused by malicious aggregation by 42%, keeps the audit false positive rate below 3.1%, and significantly improves model robustness and system trustworthiness.
Abstract
The serverless nature of Decentralized Federated Learning (DFL) requires assigning the aggregation role to specific participants in each federated round. Existing DFL architectures verify the trustworthiness of the aggregator node at selection time, but most overlook the possibility that the nominated node may turn rogue and act maliciously after being selected. To address this problem, this paper proposes a DFL structure, called TrustChain, that scores candidate aggregators before selection based on their past behavior and additionally audits them after aggregation. To do so, the statistical independence between the client updates and the aggregated model is continuously monitored using the Hilbert–Schmidt Independence Criterion (HSIC). The proposed method combines blockchain, anomaly detection, and concept drift analysis. The designed structure is evaluated on several federated datasets under attack scenarios with varying numbers of Byzantine nodes.
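To make the independence test concrete, below is a minimal sketch of a biased HSIC estimator of the kind the audit could rely on. Everything here is an illustrative assumption, not the paper's implementation: the RBF kernel with a median-heuristic bandwidth, the idea of stacking one flattened client-update (and aggregate) summary per federated round as rows of `X` and `Y`, and all function names. A small HSIC value would then indicate that the aggregated model has become statistically independent of the honest client updates, flagging the round for audit.

```python
import numpy as np

def rbf_kernel(X, gamma=None):
    """Gaussian (RBF) kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    d2 = np.clip(d2, 0.0, None)                      # guard against tiny negatives
    if gamma is None:
        # Median heuristic for the bandwidth (an assumption, not from the paper).
        pos = d2[d2 > 0]
        gamma = 1.0 / np.median(pos) if pos.size else 1.0
    return np.exp(-gamma * d2)

def hsic(X, Y):
    """Biased HSIC estimator: trace(K H L H) / (n - 1)^2, H = I - (1/n) 11^T."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_kernel(X), rbf_kernel(Y)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```

In this sketch, a per-round audit would compare `hsic(X, Y)` against a threshold calibrated on honest rounds (e.g. via the concept-drift analysis the paper mentions); dependent data yields a markedly larger statistic than independent data, while the biased estimator stays non-negative.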