🤖 AI Summary
This paper addresses semantic segmentation under weak supervision using only noisy, coarse-grained image-level positive/negative annotations, without any pixel-level labels. To mitigate the imprecision and noise inherent in such coarse supervision, the authors propose a framework of two coupled CNNs that jointly models the underlying true label distribution and the noise generation process. Specifically, it integrates complementary label learning with noise-aware training to explicitly disentangle and correct the bias in the negative-class distribution. The method operates entirely without pixel-level supervision and is validated across diverse domains, including MNIST, Cityscapes, and retinal medical images. In settings where coarse annotations make up only a small fraction of the labels, it significantly outperforms existing weakly supervised approaches, improving segmentation accuracy by 3.2–5.7 mIoU. These results demonstrate its robustness to label noise and strong cross-domain generalization.
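One common way to "jointly model the true label distribution and the noise generation process", as the summary describes, is to compose the network's estimated true-class probabilities with a label-noise transition matrix and fit the result to the observed noisy annotations. The sketch below illustrates that generic idea; the function name and the transition-matrix formulation are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def observed_label_probs(true_probs, transition):
    """Compose estimated true-label probabilities with a noise model.

    true_probs: (H, W, K) per-pixel distribution over true classes,
                e.g. the softmax output of a segmentation CNN.
    transition: (K, K) row-stochastic matrix; transition[i, j] is the
                probability that true class i was annotated as class j.
    Returns the (H, W, K) distribution over *observed* noisy labels,
    which can be trained against the coarse annotations directly.
    """
    return true_probs @ transition
```

With an identity transition matrix the observed distribution equals the true one; an off-diagonal transition matrix spreads probability mass onto the classes the annotator tends to confuse, so the loss against noisy labels no longer forces the true-label network to absorb the noise.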
📝 Abstract
Large annotated datasets are vital for training segmentation models, but pixel-level labeling is time-consuming, error-prone, and often requires scarce expert annotators, especially in medical imaging. In contrast, coarse annotations are quicker, cheaper, and easier to produce, even by non-experts. In this paper, we propose to use coarse drawings of both positive (target) and negative (background) classes in the image, even with noisy pixels, to train a convolutional neural network (CNN) for semantic segmentation. We present a method for learning the true segmentation label distributions from purely noisy coarse annotations using two coupled CNNs. The separation of the two CNNs is enforced by maintaining high fidelity to the characteristics of the noisy training annotations. We further propose complementary label learning, which encourages estimation of the negative label distribution. To illustrate the properties of our method, we first use a toy segmentation dataset based on MNIST. We then present quantitative results on publicly available datasets: the Cityscapes dataset for multi-class segmentation, and retinal images for medical applications. In all experiments, our method outperforms state-of-the-art methods, particularly when the ratio of coarse annotations is small compared to the given dense annotations.
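Complementary label learning, as mentioned above, trains on labels that state which class a pixel is *not* (here, the negative/background scribbles). A minimal per-pixel loss for this is to penalize probability mass on the excluded class. The sketch below is one simple instantiation under that assumption; the function names and the exact loss form are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def complementary_loss(logits, comp_labels):
    """Per-pixel complementary-label loss (illustrative sketch).

    logits:      (H, W, K) raw network outputs.
    comp_labels: (H, W) integer map; comp_labels[i, j] = k means
                 "pixel (i, j) is NOT class k", e.g. a coarse
                 negative scribble marking background.
    Minimizing -log(1 - p_k) pushes probability mass away from the
    class the annotation rules out.
    """
    p = softmax(logits)
    h, w, _ = p.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    p_comp = p[rows, cols, comp_labels]  # probability of the excluded class
    return -np.log(np.clip(1.0 - p_comp, 1e-12, None)).mean()
```

The loss is small when the network already assigns low probability to the excluded class, and grows as the network confuses a pixel with the class its coarse negative annotation rules out.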