🤖 AI Summary
To address weak downstream-task generalization, poor few-shot performance, and high computational complexity in wireless communication and sensing systems, this paper introduces LWM, the first foundation model for wireless channels. Built on the Transformer architecture, LWM is pretrained in a self-supervised manner on large-scale channel data to learn task-agnostic, context-aware, universal channel embeddings. Its core contribution is establishing the foundation-model paradigm for wireless channels, enabling data-efficient and transferable channel representation learning. Experiments show that LWM consistently outperforms raw channel representations across diverse communication and sensing downstream tasks, particularly when training data is scarce or the downstream model is complex, providing a scalable, reusable representational foundation for intelligent wireless systems. A minimal sketch of this pretraining recipe follows below.
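The summary describes the core recipe, a Transformer encoder pretrained self-supervised on channel data, only in prose. The PyTorch sketch below illustrates what such pretraining *could* look like under stated assumptions: channel matrices are split into patch tokens and a fraction of them is masked and reconstructed, BERT-style. All names (`ChannelEncoder`, `masked_pretrain_step`), shapes, and the masking objective are illustrative assumptions, not LWM's actual implementation.

```python
import torch
import torch.nn as nn

class ChannelEncoder(nn.Module):
    """Hypothetical Transformer encoder over channel 'patches'.

    A wireless channel matrix is flattened into a sequence of patch
    tokens, linearly embedded, and contextualized by self-attention,
    mirroring the encoders used by language foundation models.
    """

    def __init__(self, patch_dim=32, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Reconstruction head used only during self-supervised pretraining.
        self.head = nn.Linear(d_model, patch_dim)

    def forward(self, patches):                 # (batch, seq, patch_dim)
        tokens = self.embed(patches)
        return self.encoder(tokens)             # contextual channel embeddings

def masked_pretrain_step(model, patches, mask_ratio=0.15):
    """One self-supervised step: hide random patches, reconstruct them."""
    mask = torch.rand(patches.shape[:2]) < mask_ratio   # (batch, seq)
    corrupted = patches.clone()
    corrupted[mask] = 0.0                               # zero out masked tokens
    embeddings = model(corrupted)
    recon = model.head(embeddings)
    # Loss is computed only on the masked positions.
    return nn.functional.mse_loss(recon[mask], patches[mask])
```

Because the objective needs no labels, pretraining of this kind can consume arbitrarily large channel datasets, which is what makes the learned embeddings reusable across tasks.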
📝 Abstract
This paper presents the Large Wireless Model (LWM), the world's first foundation model for wireless channels. Designed to be task-agnostic, LWM generates universal, rich, contextualized channel embeddings (features) that can enhance performance across a wide range of downstream tasks in wireless communication and sensing systems. Toward this objective, LWM, which has a transformer-based architecture, was pretrained in a self-supervised manner on large-scale wireless channel datasets. Our results show consistent improvements in downstream tasks when using LWM embeddings compared to raw channel representations, especially in scenarios with high-complexity machine learning tasks and limited training datasets. LWM's ability to learn from large-scale wireless data opens a promising direction for intelligent systems that can efficiently adapt to diverse tasks with limited data, paving the way toward addressing key challenges in wireless communication and sensing systems.
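The abstract's central claim is that pretrained embeddings, rather than raw channels, feed the downstream task. Continuing the sketch above, one plausible usage pattern is to freeze the pretrained encoder and train only a small task head, which is where the data-efficiency benefit would come from; the task, shapes, and head below are illustrative assumptions.

```python
import torch
import torch.nn as nn
# Reuses the hypothetical ChannelEncoder from the pretraining sketch above.

encoder = ChannelEncoder()                  # assume pretrained weights are loaded
for p in encoder.parameters():
    p.requires_grad = False                 # reuse embeddings; don't fine-tune

task_head = nn.Sequential(                  # e.g., an 8-way classifier head
    nn.Flatten(),
    nn.Linear(16 * 64, 8),                  # 16 patches x d_model=64 -> 8 classes
)

patches = torch.randn(4, 16, 32)            # toy batch of channel patches
with torch.no_grad():
    features = encoder(patches)             # (4, 16, 64) contextual embeddings
logits = task_head(features)                # only the head is trainable
```

Under this pattern, only the small head is learned per task, so a few labeled samples can suffice, consistent with the limited-data gains the paper reports.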