🤖 AI Summary
Target-oriented semantic communication for resource-constrained edge devices demands efficient, robust, task-aware transmission under dynamic bandwidth and adverse channel conditions. To address this, we propose a dynamically configurable Transformer-driven deep joint source-channel coding (DJSCC) framework with two key innovations: (i) a semantic token adaptive selection mechanism that identifies task-critical tokens, and (ii) a Lyapunov-based stochastic optimization framework that enables dual-dimensional resource control, jointly optimizing the token count and embedding dimension. By performing end-to-end semantic modeling and compression, the framework significantly enhances downstream task performance. In low-bandwidth (≤0.1 bits per pixel) and high-noise (SNR ≤ 5 dB) regimes, it improves object-detection mean Average Precision (mAP) by an average of 12.6% over state-of-the-art methods while reducing inference latency by 37%.
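The token-selection idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the importance `scores` stand in for whatever learned task-relevance head the framework uses, and the top-k rule is an assumed selection criterion.

```python
import numpy as np

def select_semantic_tokens(tokens, scores, k):
    """Keep the k highest-scoring tokens (the token-count axis of the
    dual-dimensional control). `scores` is a placeholder for a learned
    task-relevance signal; top-k selection is an assumption here."""
    idx = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    idx = np.sort(idx)                  # preserve original token order
    return tokens[idx], idx

# Toy example: 6 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 4))
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.5])
kept, idx = select_semantic_tokens(tokens, scores, k=3)
print(kept.shape, idx.tolist())  # (3, 4) [1, 3, 5]
```

Selecting by score but re-sorting the surviving indices keeps positional structure intact for the downstream Transformer, which is one plausible design choice for such a mechanism.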
📝 Abstract
This paper presents an adaptive framework for edge inference based on a dynamically configurable, Transformer-powered deep joint source-channel coding (DJSCC) architecture. Motivated by a practical scenario in which a resource-constrained edge device engages in goal-oriented semantic communication, such as selectively transmitting the features essential for object detection to an edge server, our approach enables efficient, task-aware data transmission under varying bandwidth and channel conditions. Input data is tokenized into compact, high-level semantic representations, refined by a Transformer, and transmitted over noisy wireless channels. Within the DJSCC pipeline, a semantic token selection mechanism adaptively compresses informative features into a user-specified number of tokens per sample. These tokens are then further compressed by the JSCC module, yielding a flexible token communication strategy that adjusts both the number of transmitted tokens and their embedding dimensions. To enhance robustness under dynamic network conditions, we incorporate a resource allocation algorithm based on Lyapunov stochastic optimization, effectively balancing compression efficiency and task performance. Experimental results demonstrate that our system consistently outperforms existing baselines, highlighting its potential as a strong foundation for AI-native semantic communication in edge intelligence applications.
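The Lyapunov-based resource allocation described above follows the general drift-plus-penalty pattern: a virtual queue tracks cumulative bandwidth overuse, and each slot the controller picks the (token count, embedding dimension) pair minimizing a weighted sum of task distortion and queue-scaled rate. The sketch below is a toy instance of that pattern; the configuration list, cost models, and units are all hypothetical stand-ins, not the paper's actual values.

```python
# Candidate (num_tokens, embed_dim) pairs -- hypothetical configurations.
CONFIGS = [(4, 32), (8, 32), (8, 64), (16, 64)]

def choose_config(Q, V, distortion_of, rate_of, budget):
    """Drift-plus-penalty rule: minimize V*distortion + Q*(rate - budget).
    Larger Q (bandwidth debt) pushes the choice toward cheaper configs."""
    return min(CONFIGS,
               key=lambda c: V * distortion_of(c) + Q * (rate_of(c) - budget))

def update_queue(Q, rate, budget):
    """Virtual queue accumulating bandwidth overuse, floored at zero."""
    return max(Q + rate - budget, 0.0)

# Toy cost models: distortion falls and rate grows with tokens*dim.
rate_of = lambda c: c[0] * c[1] / 1024          # channel use (toy units)
distortion_of = lambda c: 1.0 / (c[0] * c[1])   # stand-in task loss

Q, V, budget = 0.0, 50.0, 0.4
choices = []
for _ in range(3):
    cfg = choose_config(Q, V, distortion_of, rate_of, budget)
    Q = update_queue(Q, rate_of(cfg), budget)
    choices.append(cfg)
print(choices)  # [(16, 64), (8, 32), (8, 32)]
```

With an empty queue the controller greedily picks the richest configuration; once the queue registers overspending it backs off to a cheaper pair, which is exactly the compression-versus-performance balancing the abstract describes.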