🤖 AI Summary
Deep learning-based channel estimation methods generalize poorly in dynamic wireless environments and struggle to adapt to multi-task and zero-shot scenarios. To address this, we propose MoE-CE, a modular, end-to-end channel estimation framework built on a Mixture-of-Experts (MoE) architecture. MoE-CE comprises multiple specialized expert subnetworks, each tailored to distinct channel characteristics, together with a lightweight learned router that dynamically selects the most relevant experts for each input. Crucially, this design increases model capacity and cross-scenario adaptability without a proportional increase in inference cost. The framework is agnostic to the choice of backbone and learning algorithm, is trained on synthetic data, and supports plug-and-play deployment. Extensive experiments across varying signal-to-noise ratios (SNRs), resource block (RB) configurations, and channel profiles demonstrate that MoE-CE achieves significantly higher estimation accuracy and robustness than conventional deep learning methods under both multi-task and zero-shot settings, while maintaining efficient inference.
📝 Abstract
Reliable channel estimation (CE) is fundamental for robust communication in dynamic wireless environments, where models must generalize across varying conditions such as signal-to-noise ratios (SNRs), numbers of resource blocks (RBs), and channel profiles. Traditional deep learning (DL)-based methods struggle to generalize effectively across such diverse settings, particularly under multi-task and zero-shot scenarios. In this work, we propose MoE-CE, a flexible mixture-of-experts (MoE) framework designed to enhance the generalization capability of DL-based CE methods. MoE-CE provides an appropriate inductive bias by leveraging multiple expert subnetworks, each specialized in distinct channel characteristics, together with a learned router that dynamically selects the most relevant experts for each input. This architecture increases model capacity and adaptability without a proportional rise in computational cost, and it is agnostic to the choice of backbone model and learning algorithm. Through extensive experiments on synthetic datasets generated under diverse SNRs, RB numbers, and channel profiles, including multi-task and zero-shot evaluations, we demonstrate that MoE-CE consistently outperforms conventional DL approaches, achieving significant performance gains while maintaining efficiency.
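To make the routing mechanism concrete, below is a minimal sketch of a top-k routed MoE layer in PyTorch. It is illustrative only: the expert shape, the number of experts, the top-k value, and all names (`MoELayer`, `num_experts`, `top_k`) are assumptions, since the abstract does not specify the backbone or the router internals. The key property it demonstrates is that only the selected experts execute for each input, so adding experts grows capacity without a proportional rise in inference cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    """Top-k routed mixture-of-experts layer (illustrative sketch, not the paper's code)."""

    def __init__(self, dim, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small MLP; in MoE-CE each would specialize in
        # distinct channel characteristics during training.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # Lightweight learned router: a single linear layer scoring all experts.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):
        # x: (batch, dim) feature vectors, e.g. one per received pilot block.
        scores = self.router(x)                          # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # pick the top-k experts
        weights = F.softmax(weights, dim=-1)             # renormalize over the selected k
        out = torch.zeros_like(x)
        # Only the selected experts run for each input, so inference cost
        # stays roughly constant as more experts are added.
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out
```

A forward pass would look like `MoELayer(dim=128)(torch.randn(32, 128))`; the router and experts are trained jointly end-to-end, and the backbone that produces the input features can be swapped freely, which is the backbone-agnostic property the abstract emphasizes.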