🤖 AI Summary
Existing fisheye image distortion modeling is inadequate for pixel-level dense prediction tasks, and conventional architectures fail to generalize across varying distortion levels and lens types. Method: We propose a distortion-aware encoder-decoder architecture featuring: (1) a novel radial Transformer that integrates physics-driven radial distortion modeling; (2) a distortion-adaptive token sampling strategy to mitigate feature sparsity at image boundaries; and (3) a modified U-Net backbone enabling distortion-aware feature propagation. Contribution/Results: The model achieves zero-shot generalization to unseen distortion magnitudes and lens configurations without fine-tuning. Evaluated on depth estimation across diverse distortion levels—from extremely low to extremely high—and under out-of-distribution settings, it attains state-of-the-art performance, significantly outperforming mainstream baselines. This demonstrates superior generalizability and practical applicability for fisheye vision tasks.
📝 Abstract
Wide-angle fisheye images are becoming increasingly common for perception tasks in applications such as robotics, security, and mobility (e.g., drones, avionics). However, current models often either ignore the distortions in wide-angle images or are not suited to pixel-level tasks. In this paper, we present an encoder-decoder model based on a radial transformer architecture that adapts to distortions in wide-angle lenses by leveraging the physical characteristics defined by the radial distortion profile. In contrast to the original model, which performs only classification tasks, we introduce a U-Net architecture, DarSwin-Unet, designed for pixel-level tasks. Furthermore, we propose a novel sampling strategy that minimizes sparsity when creating the input tokens from the image. Our approach enhances the model's ability to handle pixel-level tasks in wide-angle fisheye images, making it more effective for real-world applications. Compared to other baselines, DarSwin-Unet achieves the best results across different datasets, with significant gains when trained on bounded levels of distortion (very low, low, medium, and high) and tested on all levels, including out-of-distribution distortions. We demonstrate its performance on depth estimation and show through extensive experiments that DarSwin-Unet can perform zero-shot adaptation to unseen distortions of different wide-angle lenses.
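To give a concrete sense of distortion-aware sampling, the sketch below places token-sampling centers uniformly in incident angle and maps them to image-plane radii through a lens projection function, so their spacing follows the distortion curve rather than a fixed Cartesian grid. This is a minimal illustration, not the paper's implementation: the function names and parameters are hypothetical, and the standard equidistant fisheye model (r = f·θ) stands in for a calibrated radial distortion profile.

```python
import numpy as np

def equidistant_projection(theta, f):
    # Standard equidistant fisheye model: image radius grows linearly
    # with incident angle theta. Hypothetical stand-in for the lens's
    # calibrated radial distortion profile.
    return f * theta

def radial_token_centers(theta_max, f, n_radial, n_azimuth,
                         projection=equidistant_projection):
    """Place token-sampling centers along the lens's radial profile.

    Incident angles are spaced uniformly in theta, so the resulting
    image-plane spacing automatically adapts to the distortion curve.
    """
    thetas = np.linspace(0.0, theta_max, n_radial + 1)[1:]    # skip the center point
    phis = np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False)
    radii = projection(thetas, f)                             # pixels from image center
    # Cartesian coordinates of every (radius, azimuth) sample center
    xs = radii[:, None] * np.cos(phis)[None, :]
    ys = radii[:, None] * np.sin(phis)[None, :]
    return np.stack([xs, ys], axis=-1)                        # (n_radial, n_azimuth, 2)

centers = radial_token_centers(theta_max=np.pi / 2, f=300.0,
                               n_radial=8, n_azimuth=16)
print(centers.shape)  # (8, 16, 2)
```

Swapping `projection` for a different lens model changes where samples land in the image while the angular layout stays fixed, which is the property that lets a radial architecture adapt across lens types.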