SCU: An Efficient Machine Unlearning Scheme for Deep Learning Enabled Semantic Communications

📅 2025-02-27
🏛️ IEEE Transactions on Information Forensics and Security
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of user data removal in deep learning-based semantic communication, this paper proposes the first machine unlearning framework tailored for jointly trained unsupervised encoders and decoders. The method introduces (1) a mutual information minimization mechanism that enables targeted forgetting at the semantic representation level, and (2) a contrastive compensation strategy that restores model utility without accessing the original training data. Crucially, the approach avoids full retraining and data reconstruction, substantially reducing deployment overhead. Extensive experiments on three benchmark datasets demonstrate that, after unlearning, residual influence remains below 1%, PSNR and SSIM degradation is under 0.8%, and inference latency drops by 62%. The framework thus achieves a favorable trade-off among security (effective forgetting), utility (preserved reconstruction quality), and efficiency (low computational cost).

📝 Abstract
Deep learning (DL) enabled semantic communications leverage DL to train encoders and decoders (codecs) that extract and recover semantic information. However, most semantic training datasets contain personal private information. These concerns create a strong demand for targeted data erasure from semantic codecs when users wish to remove their data from the semantic system. Existing machine unlearning solutions remove data contributions from trained models, but usually in supervised, single-model scenarios. These methods are infeasible in semantic communications, which often require jointly training unsupervised encoders and decoders. In this paper, we investigate the unlearning problem in DL-enabled semantic communications and propose a semantic communication unlearning (SCU) scheme to tackle it. SCU includes two key components. First, we customize a joint unlearning method for the semantic codecs, including the encoder and decoder, by minimizing the mutual information between the learned semantic representation and the erased samples. Second, to compensate for the semantic model utility degradation caused by unlearning, we propose a contrastive compensation method, which treats the erased data as negative samples and the remaining data as positive samples to retrain the unlearned semantic models contrastively. Theoretical analysis and extensive experimental results on three representative datasets demonstrate the effectiveness and efficiency of our proposed methods.
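The contrastive compensation step treats erased samples as negatives and retained samples as positives when retraining the unlearned codec. The abstract does not specify the exact loss, so the InfoNCE-style formulation, function name, and parameters below are assumptions; this is a minimal NumPy sketch of the idea, not the paper's implementation:

```python
import numpy as np

def contrastive_compensation_loss(anchor, positives, negatives, temperature=0.5):
    """Sketch of a contrastive compensation objective (assumed InfoNCE form):
    retained-data representations serve as positives, erased-data
    representations as negatives, so minimizing the loss pulls the anchor
    representation toward retained data and pushes it away from erased data."""
    def cos_sim(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Exponentiated, temperature-scaled similarities to each sample set.
    pos = sum(np.exp(cos_sim(anchor, p) / temperature) for p in positives)
    neg = sum(np.exp(cos_sim(anchor, n) / temperature) for n in negatives)

    # Loss is small when the anchor aligns with retained (positive) samples
    # and large when it still resembles erased (negative) samples.
    return -np.log(pos / (pos + neg))

# Toy usage: an anchor representation aligned with retained data yields a
# lower loss than one aligned with erased data.
anchor = np.array([1.0, 0.0])
retained = [np.array([1.0, 0.1])]
erased = [np.array([-1.0, 0.0])]
aligned_loss = contrastive_compensation_loss(anchor, retained, erased)
misaligned_loss = contrastive_compensation_loss(anchor, erased, retained)
```

In a full pipeline this loss would be applied to the encoder's semantic representations after the mutual-information-minimization unlearning step, using mini-batches of retained and erased samples.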
Problem

Research questions and friction points this paper is trying to address.

Machine unlearning in semantic communications
Joint unlearning for encoders and decoders
Compensating semantic model utility degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint unlearning for semantic codecs
Contrastive compensation method
Minimizing mutual information effectively
Weiqi Wang
School of Computer Science, University of Technology Sydney, Australia
Zhiyi Tian
School of Computer Science, University of Technology Sydney, Australia
Chenhan Zhang
PhD
Deep learning, privacy-preserving
Shui Yu
School of Computer Science, University of Technology Sydney, Australia