🤖 AI Summary
To address low semantic communication efficiency and poor visual question answering (VQA) accuracy under low signal-to-noise ratio (SNR) conditions in vehicular networks, this paper proposes the first large language model (LLM)-driven semantic communication framework tailored for VQA tasks. Built upon LLaVA, it introduces a task-oriented semantic encoder that innovatively fuses user-attention-guided image patching with objective visual features, and designs a semantic-importance-weighted transmission mechanism for adaptive resource allocation at the semantic level. Compared to conventional communication methods, the framework improves VQA accuracy by 33.1% at 10 dB SNR and 13.4% at 12 dB SNR, significantly enhancing task robustness and spectral efficiency in noisy environments. Key contributions include: (i) the first integration of large multimodal models (LMMs) into vehicular semantic communication; (ii) a novel subjective–objective collaborative paradigm for image patch importance assessment; and (iii) end-to-end semantic-aware transmission optimization.
📝 Abstract
Task-oriented semantic communication has emerged as a fundamental approach for enhancing performance in various communication scenarios. While recent advances in Generative Artificial Intelligence (GenAI), such as Large Language Models (LLMs), have been applied to semantic communication designs, the potential of Large Multimodal Models (LMMs) remains largely unexplored. In this paper, we investigate an LMM-based vehicle AI assistant using a Large Language and Vision Assistant (LLaVA) and propose a task-oriented semantic communication framework to facilitate efficient interaction between users and cloud servers. To reduce computational demands and shorten response time, we optimize LLaVA's image slicing to selectively focus on areas of utmost interest to users. Additionally, we assess the importance of image patches by combining objective and subjective user attention, adjusting energy usage for transmitting semantic information. This strategy optimizes resource utilization, ensuring precise transmission of critical information. We construct a Visual Question Answering (VQA) dataset for traffic scenarios to evaluate effectiveness. Experimental results show that our semantic communication framework significantly increases accuracy in answering questions under the same channel conditions, performing particularly well in environments with poor Signal-to-Noise Ratios (SNR). Accuracy can be improved by 13.4% at an SNR of 12dB and 33.1% at 10dB, respectively.