🤖 AI Summary
This work addresses two limitations of existing wireless signal propagation models: weak cross-scene generalization and the need for scene-specific training. We propose a generalizable 3D Wireless Radiance Field method. Our approach features: (1) a geometry-aware Transformer encoder that explicitly encodes the spatial geometric relationships between transmitters and receivers, enabling scene-agnostic positional encoding; and (2) a hybrid architecture that integrates neural radiance fields with neural-driven ray tracing to predict received signal strength end-to-end, implicitly learning multipath propagation physics. Trained on only a single scene, the model generalizes effectively to unseen environments. On standard benchmarks, it reduces RMS error on unseen scenes by 32% relative to prior methods, outperforming both NeRF-based and conventional ray-tracing baselines. The method combines high accuracy, strong cross-scene generalization, and low deployment overhead.
📝 Abstract
We present Generalizable Wireless Radiance Fields (GWRF), a framework for modeling wireless signal propagation at arbitrary 3D transmitter and receiver positions. Unlike previous methods that adapt vanilla Neural Radiance Fields (NeRF) from the optical to the wireless domain and therefore require extensive per-scene training, GWRF generalizes effectively across scenes. First, a wireless scene representation module, built on a geometry-aware Transformer encoder, incorporates information from geographically proximate transmitters to learn a generalizable wireless radiance field. Second, a neural-driven ray tracing algorithm operates on this field to automatically compute signal reception at the receiver. Experimental results demonstrate that GWRF outperforms existing methods on single scenes and achieves state-of-the-art performance on unseen scenes.
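To make the two-stage pipeline concrete, below is a minimal, illustrative NumPy sketch of the idea: a scene-agnostic geometric encoding of a transmitter-receiver pair, a toy stand-in for the learned radiance field, and a volume-rendering-style ray integration that accumulates received power. The feature choices, the tiny linear "field", and the weight matrix `W` are all hypothetical simplifications for illustration; they are not the paper's actual architecture.

```python
import numpy as np

def geometry_features(tx, rx):
    """Scene-agnostic geometric encoding of a Tx-Rx pair.

    Uses relative displacement, distance, and log-distance rather than
    absolute coordinates, so the encoding is independent of any
    scene-specific frame (a hypothetical feature choice).
    """
    d = rx - tx
    r = np.linalg.norm(d) + 1e-9
    return np.concatenate([d / r, [r], [np.log1p(r)]])

def radiance_field(points, feat, W):
    """Toy stand-in for the learned wireless radiance field: maps sample
    points, conditioned on Tx-Rx geometry, to per-point emitted power
    and attenuation. A real model would be a neural network."""
    h = np.tanh(points @ W[:3] + feat @ W[3:])
    power = np.exp(h[:, 0])                  # non-negative emitted power
    atten = 1.0 / (1.0 + np.exp(-h[:, 1]))  # attenuation in (0, 1)
    return power, atten

def trace_ray(tx, rx, W, n_samples=32):
    """Neural-driven ray tracing along the Tx->Rx segment: accumulate
    power weighted by the transmittance surviving to each sample,
    analogous to volume rendering in NeRF."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = tx[None, :] + ts[:, None] * (rx - tx)[None, :]
    feat = geometry_features(tx, rx)
    power, atten = radiance_field(pts, feat, W)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - atten[:-1]]))
    return float(np.sum(transmittance * atten * power))

# Toy random "learned" weights, for illustration only.
W = np.random.default_rng(0).normal(size=(8, 2))
rss = trace_ray(np.zeros(3), np.array([1.0, 2.0, 3.0]), W)
```

In the actual framework, the field would be conditioned on Transformer-encoded features from nearby transmitters, and the end-to-end training signal is the measured received signal strength; the sketch only shows how a geometry-conditioned field plus ray integration yields a single scalar prediction per Tx-Rx pair.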