🤖 AI Summary
Edge detection (ED) faces two key challenges: heavy annotation noise in human-labeled ground truth limits model performance, and existing methods struggle to achieve both high accuracy and texture robustness under strict error tolerances. To address these, we propose an architecture built on Cascaded Skipping Density Blocks (CSDB) together with a noise-agnostic training paradigm. CSDB enhances texture preservation through multi-scale density modeling, while our training strategy enables, for the first time, direct supervision with clean edge maps rather than noisy manual annotations, augmented by noise-suppressing data augmentation. Evaluated on standard benchmarks including BSDS500 and NYUDv2, our method achieves state-of-the-art performance, with significant improvements in average precision (AP). The source code is publicly released for reproducibility.
📝 Abstract
Image edge detection (ED) is a fundamental task in computer vision. While convolution-based models have significantly advanced ED performance, achieving high precision under strict error-tolerance constraints remains challenging. Furthermore, the reliance on noisy, human-annotated training data limits model performance, even when the inputs are edge maps themselves. In this paper, we address these challenges in two respects. First, we propose a novel ED model built on Cascaded Skipping Density Blocks (CSDB) to enhance precision and robustness. Extensive experiments show that our model achieves state-of-the-art (SOTA) performance across multiple datasets, with substantial improvements in average precision (AP). Second, we introduce a novel data augmentation strategy that enables the integration of noiseless annotations during training, improving model performance, particularly when processing edge maps directly. Our findings contribute a more precise ED architecture and the first method for integrating noiseless training data into ED tasks, suggesting directions for further improving ED models. Code is available at https://github.com/Hao-B-Shu/SDPED.