🤖 AI Summary
Current wireless foundation models (WFMs) accept only a single input modality, which limits their adaptability because the most informative modality changes with the task and channel conditions. To address this, we propose the first multimodal WFM, capable of jointly processing raw IQ streams and image-like wireless modalities, including spectrograms and channel state information (CSI), within a unified self-supervised pretraining framework. The key innovation is a multimodal masked wireless modeling objective that learns a joint representation across heterogeneous signal types while remaining compatible with diverse downstream tasks. Through deep fusion and co-optimization of IQ sequences and image-like data, the model is evaluated on five benchmark tasks spanning both modality families: human activity sensing, RF signal classification, 5G NR positioning, RF device fingerprinting, and interference detection/classification. It matches strong single-modality WFMs and surpasses them in several cases, demonstrating the broad, task-agnostic gains of multimodal representation learning for wireless intelligence.
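To make the pretraining idea concrete, here is a minimal sketch of multimodal masked wireless modeling in PyTorch. Everything in it is an illustrative assumption rather than the paper's actual architecture: the class name, embedding size, mask ratio, 6-layer encoder, and the choice of reconstructing raw samples as the target are all hypothetical, and positional embeddings are omitted for brevity. The sketch only shows the core mechanism the summary describes: IQ windows and image-like inputs are tokenized into a shared embedding space, a random subset of tokens is masked, and a joint Transformer encoder reconstructs the masked content.

```python
import torch
import torch.nn as nn

class MultimodalMaskedWFM(nn.Module):
    """Sketch of multimodal masked wireless modeling (hypothetical design):
    raw IQ segments and image-like patches (spectrogram/CSI) are embedded
    into one token space, a fraction of tokens is masked, and a shared
    Transformer encoder is trained to reconstruct the masked content."""

    def __init__(self, dim=256, iq_seg=64, patch=16, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Tokenizers: 1D conv over IQ (2 channels: I and Q),
        # 2D conv over image-like inputs (1 channel, e.g. a spectrogram).
        self.iq_embed = nn.Conv1d(2, dim, kernel_size=iq_seg, stride=iq_seg)
        self.img_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        # Learned modality embeddings keep the two token streams distinguishable.
        self.modality = nn.Parameter(torch.zeros(2, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        # Lightweight reconstruction heads, one per modality.
        self.iq_head = nn.Linear(dim, 2 * iq_seg)
        self.img_head = nn.Linear(dim, patch * patch)

    def random_mask(self, tokens):
        # Replace a random subset of tokens with the shared mask token.
        B, N, D = tokens.shape
        mask = torch.rand(B, N, device=tokens.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand(B, N, D), tokens)
        return masked, mask

    def forward(self, iq, img):
        # iq: (B, 2, T) raw IQ stream; img: (B, 1, H, W) spectrogram/CSI.
        iq_tok = self.iq_embed(iq).transpose(1, 2) + self.modality[0]
        img_tok = self.img_embed(img).flatten(2).transpose(1, 2) + self.modality[1]
        tokens = torch.cat([iq_tok, img_tok], dim=1)   # joint token sequence
        masked, mask = self.random_mask(tokens)
        z = self.encoder(masked)                        # deep cross-modal fusion
        n_iq = iq_tok.shape[1]
        return self.iq_head(z[:, :n_iq]), self.img_head(z[:, n_iq:]), mask, n_iq

# One illustrative pretraining step: MSE on masked IQ tokens
# (the image-like side is handled analogously).
model = MultimodalMaskedWFM()
iq = torch.randn(4, 2, 4096)        # batch of raw IQ windows
img = torch.randn(4, 1, 128, 128)   # batch of spectrograms
iq_pred, img_pred, mask, n_iq = model(iq, img)
iq_tgt = iq.reshape(4, 2, -1, 64).permute(0, 2, 1, 3).flatten(2)  # (B, N_iq, 128)
loss = ((iq_pred - iq_tgt) ** 2)[mask[:, :n_iq]].mean()
loss.backward()
```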
📝 Abstract
Wireless foundation models (WFMs) have recently demonstrated promising capabilities, jointly performing multiple wireless functions and adapting effectively to new environments. However, current WFMs process only one modality, even though the most informative modality changes with the task and operating conditions, and no single modality is best for all tasks. WFMs should therefore be designed to accept multiple modalities, enabling a broader and more diverse range of tasks and scenarios. In this work, we propose and build the first multimodal wireless foundation model capable of processing both raw IQ streams and image-like wireless modalities (e.g., spectrograms and CSI) and performing multiple tasks across both. We introduce masked wireless modeling for the multimodal setting: a self-supervised objective and pretraining recipe that learns a joint representation from IQ streams and image-like wireless modalities. We evaluate the model on five tasks across both modality families: image-based (human activity sensing, RF signal classification, 5G NR positioning) and IQ-based (RF device fingerprinting, interference detection/classification). The multimodal WFM is competitive with single-modality WFMs, and in several cases surpasses their performance. Our results demonstrate the strong potential of developing multimodal WFMs that support diverse wireless tasks across different modalities. We believe this provides a concrete step toward both AI-native 6G and the vision of joint sensing, communication, and localization.
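As a rough illustration of how one pretrained backbone could serve tasks from both modality families, the sketch below builds on the hypothetical MultimodalMaskedWFM above (the paper does not specify this interface; the wrapper class, pooling choice, and output sizes are assumptions). At adaptation time each task supplies only its own modality, so the wrapper tokenizes a single input stream, skips masking, and trains a small task head on pooled encoder features.

```python
import torch
import torch.nn as nn

class DownstreamWFM(nn.Module):
    """Illustrative downstream wrapper (assumed, not the authors' API):
    reuse the pretrained tokenizers and joint encoder without masking, and
    train a small head on mean-pooled token features. Classification tasks
    (e.g. interference detection) and regression tasks (e.g. 5G NR
    positioning) differ only in output dimension and loss."""

    def __init__(self, backbone, dim=256, n_out=6):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(dim, n_out)

    def forward(self, x, modality="iq"):
        if modality == "iq":   # x: (B, 2, T) raw IQ window
            tok = self.backbone.iq_embed(x).transpose(1, 2) + self.backbone.modality[0]
        else:                  # x: (B, 1, H, W) spectrogram or CSI image
            tok = self.backbone.img_embed(x).flatten(2).transpose(1, 2) + self.backbone.modality[1]
        z = self.backbone.encoder(tok)      # shared encoder, single-modality tokens
        return self.head(z.mean(dim=1))     # pool tokens, then task head

# Hypothetical usage: fingerprinting from IQ, activity sensing from spectrograms.
backbone = MultimodalMaskedWFM()            # pretrained weights would be loaded here
fp_logits = DownstreamWFM(backbone, n_out=10)(torch.randn(4, 2, 4096), modality="iq")
act_logits = DownstreamWFM(backbone, n_out=6)(torch.randn(4, 1, 128, 128), modality="image")
```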