🤖 AI Summary
Low-altitude network coverage (LANC) prediction faces two key challenges: base station antenna beam parameters are typically unavailable, and low-altitude drive-test data are sparse, leading to imbalanced feature sampling and poor model generalization. To address these, we propose a prediction framework that integrates domain expertise with disentangled representation learning. First, communication priors guide the compression of high-dimensional operational parameters, reducing modeling complexity. Second, a disentangled representation learning mechanism combines a physics-informed propagation model with dedicated subnetworks that separately extract and then aggregate semantically distinct features, namely location, beam, and channel characteristics. This design significantly improves generalization under limited-sample conditions. Experiments show a 7% reduction in prediction error over the best baseline; real-world deployment validation achieves a mean absolute error (MAE) at the 5 dB level, confirming practical feasibility for engineering deployment.
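The expert-knowledge feature compression described above can be illustrated with a minimal sketch. The parameter names, values, and the specific priors below are illustrative assumptions, not the paper's actual feature set: two standard communications priors are applied, namely that mechanical and electrical downtilt combine additively into one effective tilt, and that azimuth is a circular quantity best encoded as sine/cosine.

```python
import numpy as np

# Hypothetical raw operational parameters for one BS (names and values
# are illustrative, not taken from the paper):
params = {"mech_tilt": 3.0, "elec_tilt": 4.0, "azimuth": 120.0, "tx_power_dbm": 43.0}

def compress(p):
    # Communications prior 1: mechanical and electrical downtilt act
    # additively, so only their sum affects the beam's vertical pointing.
    total_tilt = p["mech_tilt"] + p["elec_tilt"]
    # Communications prior 2: azimuth is circular; encoding it as
    # sin/cos maps 359 deg and 1 deg to nearby feature values.
    az = np.deg2rad(p["azimuth"])
    return np.array([total_tilt, np.sin(az), np.cos(az), p["tx_power_dbm"]])

feat = compress(params)
print(feat.shape)  # two redundant tilt dimensions collapsed into one
```

The compressed vector replaces redundant raw dimensions with physically meaningful ones, which is one way the feature-space complexity reduction claimed above could be realized.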
📝 Abstract
The expansion of the low-altitude economy has underscored the importance of Low-Altitude Network Coverage (LANC) prediction for designing aerial corridors. Accurate LANC forecasting hinges on the antenna beam patterns of Base Stations (BSs), but these patterns are typically proprietary and not readily accessible. Operational parameters of BSs, which inherently encode beam information, offer an opportunity for data-driven low-altitude coverage prediction. However, collecting extensive low-altitude drive-test data is cost-prohibitive, often yielding only sparse samples per BS. This scarcity gives rise to two primary challenges: imbalanced feature sampling, since high-dimensional operational parameters vary little while low-dimensional sampling locations change substantially, and diminished generalizability stemming from insufficient data. To overcome these obstacles, we introduce a dual strategy comprising expert-knowledge-based feature compression and disentangled representation learning. The former reduces feature-space complexity by leveraging communications expertise, while the latter enhances model generalizability by integrating propagation models with distinct subnetworks that capture and aggregate the semantic representations of latent features. Experimental evaluation confirms the efficacy of our framework, yielding a 7% reduction in error compared to the best baseline algorithm. Real-network validation further attests to its reliability, achieving practical prediction accuracy with mean absolute error (MAE) at the 5 dB level.
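The disentangled architecture described in the abstract, dedicated subnetworks per semantic feature group aggregated on top of a propagation-model prior, can be sketched as follows. Everything here is a hypothetical toy: the feature dimensions, the log-distance path-loss prior, and the random-weight MLPs are stand-ins chosen only to show the branch-then-aggregate structure, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # tiny two-layer perceptron with ReLU, untrained (random weights)
    return np.maximum(x @ w1, 0.0) @ w2

w = lambda i, o: rng.normal(scale=0.1, size=(i, o))

# Hypothetical semantic feature groups for a batch of 4 samples:
loc = rng.normal(size=(4, 3))   # 3-D sampling location
beam = rng.normal(size=(4, 5))  # compressed operational (beam) parameters
chan = rng.normal(size=(4, 2))  # channel-related features

# Physics-informed prior: a log-distance path-loss term from the location
d = np.linalg.norm(loc, axis=1) + 1.0   # distance to BS; offset avoids log(0)
prior = -20.0 * np.log10(d)             # dB, free-space-style slope

# One dedicated subnetwork per semantic group (the disentangled branches)
h_loc = mlp(loc, w(3, 8), w(8, 4))
h_beam = mlp(beam, w(5, 8), w(8, 4))
h_chan = mlp(chan, w(2, 8), w(8, 4))

# Aggregate branch outputs, then predict a residual on top of the prior
agg = np.concatenate([h_loc, h_beam, h_chan], axis=1)
residual = mlp(agg, w(12, 8), w(8, 1)).squeeze(-1)
coverage_pred = prior + residual        # predicted coverage level (dB)
print(coverage_pred.shape)
```

Predicting a learned residual on top of a propagation-model prior is a common way to combine physics with data-driven components; each branch only ever sees its own feature group, which is what keeps the learned representations disentangled.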