🤖 AI Summary
To address low accuracy and temporal incoherence in weakly supervised semantic segmentation for industrial waste sorting videos, this paper proposes a pixel-level annotation-free weakly supervised video semantic segmentation method. The core contribution lies in explicitly incorporating temporal consistency into the weakly supervised classifier's training objective: (1) optical-flow-assisted motion compensation aligns class activation maps (CAMs) across adjacent frames; (2) human-in-the-loop videos, which capture the scene before and after manual manipulation, are leveraged to construct temporally consistent pseudo-labels; and (3) a temporal contrastive loss is introduced to enhance CAM stability over time. Evaluated on a real-world waste sorting video dataset, the method achieves a 12.6% improvement in mean Intersection-over-Union (mIoU) over baseline approaches, demonstrating both the effectiveness and practicality of integrating explicit temporal constraints during training.
📝 Abstract
In industrial settings, weakly supervised (WS) methods are usually preferred over their fully supervised (FS) counterparts, as they do not require costly manual annotations. Unfortunately, the segmentation masks obtained in the WS regime are typically poor in terms of accuracy. In this work, we present a WS method capable of producing accurate masks for semantic segmentation of video streams. More specifically, we build saliency maps that exploit the temporal coherence between consecutive frames in a video, promoting consistency when the same objects appear across frames. We apply our method in a waste-sorting scenario, where we perform weakly supervised video segmentation (WSVS) by training an auxiliary classifier that distinguishes between videos recorded before and after a human operator manually removes specific waste items from a conveyor belt. The saliency maps of this classifier identify the materials to be removed, and we modify the classifier training to minimize the differences between the saliency map of a central frame and those of adjacent frames, after compensating for object displacement. Experiments on a real-world dataset demonstrate the benefits of integrating temporal coherence directly into the training phase of the classifier. Code and dataset are available upon request.
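The temporal-coherence objective described above can be sketched as follows: each adjacent frame's saliency map (CAM) is warped toward the central frame using precomputed optical flow, and the training loss penalizes the remaining disagreement. This is a minimal illustrative sketch, not the paper's implementation; the function names, nearest-neighbor warping, and L1 penalty are assumptions.

```python
import numpy as np

def warp_cam(cam, flow):
    """Warp an adjacent frame's CAM toward the central frame.

    cam:  (H, W) class activation map of the adjacent frame.
    flow: (H, W, 2) optical flow mapping each central-frame pixel (x, y)
          to its location (x + dx, y + dy) in the adjacent frame.
    Uses nearest-neighbor sampling with border clamping for simplicity.
    """
    h, w = cam.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return cam[src_y, src_x]

def temporal_consistency_loss(cam_center, cams_adjacent, flows):
    """Mean absolute difference between the central CAM and each
    motion-compensated adjacent CAM (hypothetical loss term)."""
    losses = [np.abs(cam_center - warp_cam(cam, flow)).mean()
              for cam, flow in zip(cams_adjacent, flows)]
    return float(np.mean(losses))
```

In practice such a term would be added to the classification loss of the auxiliary classifier, so that saliency maps stay stable over time while the classifier learns to separate pre- and post-removal videos.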