🤖 AI Summary
Depth estimation from defocused images under photon-limited conditions is notoriously fragile: conventional blur-based depth-from-defocus (DfD) methods are highly sensitive to noise and do not accurately model the blur characteristics specific to object boundaries. Method: We propose Blurry-Edges, a low-level image patch descriptor that explicitly encodes boundary location, color, and local smoothness. Leveraging dual defocused inputs, we design an end-to-end deep network to predict this representation and integrate it with a newly derived closed-form depth–defocus relationship for noise-robust depth reconstruction. Contribution/Results: This is the first boundary-driven DfD framework with theoretical interpretability. Extensive evaluation on both synthetic and real photon-limited datasets demonstrates that our method consistently outperforms existing DfD approaches in depth accuracy, significantly improving robustness and precision under low signal-to-noise ratio conditions.
📝 Abstract
Extracting depth information from photon-limited, defocused images is challenging because depth from defocus (DfD) relies on accurate estimation of defocus blur, which is fundamentally sensitive to image noise. We present a novel approach to robustly measure object depths from photon-limited images along the defocused boundaries. It is based on a new image patch representation, Blurry-Edges, that explicitly stores and visualizes a rich set of low-level patch information, including boundaries, color, and smoothness. We develop a deep neural network architecture that predicts the Blurry-Edges representation from a pair of differently defocused images, from which depth can be calculated using a closed-form DfD relation we derive. The experimental results on synthetic and real data show that our method achieves the highest depth estimation accuracy on photon-limited images compared to a broad range of state-of-the-art DfD methods.
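To make the two-image DfD idea concrete, here is a minimal sketch of a generic closed-form depth-from-defocus relation under a thin-lens model with signed defocus blur radii. This is an illustrative stand-in, not the paper's own derivation or calibration: the function names, parameter choices, and the signed-blur assumption are all hypothetical.

```python
# Sketch of two-image depth-from-defocus under a thin-lens model.
# Assumes *signed* blur radii (no near/far ambiguity) and identical
# focal length f and aperture for both captures; units are meters.

def blur_radius(d, s, f, aperture):
    """Signed blur radius for a point at depth d imaged with
    sensor-to-lens distance s, focal length f, aperture diameter."""
    return (aperture / 2.0) * (s * (1.0 / f - 1.0 / d) - 1.0)

def depth_from_two_defocus(sigma1, sigma2, s1, s2, f):
    """Recover depth from two signed blur radii measured at two
    different sensor distances s1 != s2."""
    # Let x = 1/f - 1/d. Each blur is linear in x, so x (and hence d)
    # has a closed-form solution from the two measurements.
    x = (sigma1 - sigma2) / (sigma1 * s2 - sigma2 * s1)
    return 1.0 / (1.0 / f - x)

if __name__ == "__main__":
    f, aperture = 0.05, 0.02      # 50 mm lens, 20 mm aperture
    s1, s2 = 0.051, 0.053         # two focus (sensor) settings
    d_true = 2.0                  # object 2 m away
    sig1 = blur_radius(d_true, s1, f, aperture)
    sig2 = blur_radius(d_true, s2, f, aperture)
    print(depth_from_two_defocus(sig1, sig2, s1, s2, f))  # ~2.0
```

The sketch shows why blur estimation accuracy is the bottleneck: depth is obtained by inverting the blur model, so noise in the estimated blur radii propagates directly into depth. This is the failure mode under photon-limited imaging that motivates the Blurry-Edges representation's noise-robust boundary and blur estimates.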