Federated Self-Supervised Learning for Automatic Modulation Classification under Non-IID and Class-Imbalanced Data

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address automatic modulation classification (AMC) under non-independent and identically distributed (Non-IID) data and class-imbalanced settings, this paper proposes FedSSL-AMC, a federated self-supervised learning framework. Each client performs self-supervised representation learning on unlabeled I/Q sequences locally, requiring only minimal labeled data for downstream classification. The framework introduces a causal dilated CNN and triplet loss for federated pretraining; the authors theoretically establish representation convergence and classification separability under noise. Lightweight inference is achieved via local SVMs. Evaluated on synthetic and real-world wireless datasets, FedSSL-AMC significantly outperforms supervised federated baselines across varying SNR, carrier frequency offsets, and Non-IID label distributions. It simultaneously ensures privacy preservation, communication efficiency, and strong generalization capability.
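The summary above centers on triplet-loss self-supervision over unlabeled I/Q sequences. A minimal sketch of that loss, assuming the standard triplet margin form (the paper's exact positive/negative sampling strategy for I/Q segments is not specified here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors:
    pull the anchor toward the positive, push it away from the negative."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to a same-source sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to a different sample
    return max(d_pos - d_neg + margin, 0.0)

# Toy 2-D embeddings: anchor near its positive, far from its negative.
a = np.array([1.0, 0.0])
p = np.array([1.1, 0.0])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))  # → 0.0 (triplet already satisfies the margin)
```

Each client would minimize this loss over triplets drawn from its own unlabeled signal recordings, so no labels or raw I/Q data ever leave the device.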

📝 Abstract
Training automatic modulation classification (AMC) models on centrally aggregated data raises privacy concerns, incurs communication overhead, and often fails to confer robustness to channel shifts. Federated learning (FL) avoids central aggregation by training on distributed clients but remains sensitive to class imbalance, non-IID client distributions, and limited labeled samples. We propose FedSSL-AMC, which trains a causal, time-dilated CNN with triplet-loss self-supervision on unlabeled I/Q sequences across clients, followed by per-client SVMs on small labeled sets. We establish convergence of the federated representation learning procedure and a separability guarantee for the downstream classifier under feature noise. Experiments on synthetic and over-the-air datasets show consistent gains over supervised FL baselines under heterogeneous SNR, carrier-frequency offsets, and non-IID label partitions.
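The abstract's encoder is a causal, time-dilated CNN. Its core building block can be illustrated with a single-channel causal dilated convolution; this is a generic NumPy sketch of the operation, not the authors' architecture:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution: the output at time t depends only on
    x[t], x[t - d], x[t - 2d], ... (left zero-padding preserves causality)."""
    k = len(w)
    pad = (k - 1) * dilation                  # left padding so no future leaks in
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for t in range(len(x)):
        # taps look strictly backward in time, spaced by `dilation`
        y[t] = sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
    return y

x = np.array([1.0, 0, 0, 0, 0])               # unit impulse
print(causal_dilated_conv1d(x, [1.0, 2.0, 3.0], dilation=2))
# → [1. 0. 2. 0. 3.]  (impulse response spreads only forward in time)
```

Stacking such layers with growing dilation widens the receptive field over long I/Q sequences while keeping each output sample a function of past inputs only.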
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy concerns in centralized AMC training
Mitigates class imbalance and non-IID data in federated learning
Reduces dependency on labeled samples through self-supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated self-supervised learning with triplet-loss CNN
Client-specific SVM classifiers on small labeled data
Convergence guarantees under non-IID and noisy conditions
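Federated pretraining implies a server-side aggregation of client encoder updates. Assuming a standard FedAvg-style rule (the summary does not state the paper's exact aggregation), a sketch:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation (assumed, not the paper's stated rule):
    average client encoder parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Two clients with flat parameter vectors and unequal data volumes.
w1 = np.array([1.0, 1.0])
w2 = np.array([3.0, 5.0])
print(fedavg([w1, w2], [10, 30]))  # pulled toward the data-rich client
```

Size-weighted averaging is the usual way to handle unequal client data volumes; the non-IID robustness claimed above comes from the self-supervised objective, not from the aggregation step itself.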
Usman Akram
Department of Electrical and Computer Engineering, University of Texas at Austin
Yiyue Chen
Department of Electrical and Computer Engineering, University of Texas at Austin
Haris Vikalo
Professor, University of Texas at Austin