🤖 AI Summary
Addressing the high annotation cost and stringent accuracy requirements in industrial defect segmentation, this paper proposes a bounding-box-guided diffusion model synthesis framework. We innovatively introduce an enhanced bounding-box representation as conditional input to DDPM, enabling geometry-aware encoding, layout-appearance disentangled control, and multi-scale feature alignment—significantly improving defect localization accuracy and cross-sample consistency. Two novel quantitative metrics are proposed to evaluate synthetic image quality. Experiments demonstrate that combining only 10% real annotations with synthetic data achieves 96.3% of the full-supervision segmentation performance baseline; synthetic images yield a 27% reduction in FID score, while segmentation masks achieve a 14.8% improvement in IoU. This work establishes a new paradigm for low-supervision industrial vision: high-fidelity, pixel-precise data generation.
📝 Abstract
Synthetic dataset generation in Computer Vision, particularly for industrial applications, is still underexplored. Industrial defect segmentation, for instance, requires highly accurate labels, yet acquiring such data is costly and time-consuming. To address this challenge, we propose a novel diffusion-based pipeline for generating high-fidelity industrial datasets with minimal supervision. Our approach conditions the diffusion model on enriched bounding box representations to produce precise segmentation masks, ensuring realistic and accurately localized defect synthesis. Compared to existing layout-conditioned generative methods, our approach improves defect consistency and spatial accuracy. We introduce two quantitative metrics to evaluate the effectiveness of our method and assess its impact on a downstream segmentation task trained on real and synthetic data. Our results demonstrate that diffusion-based synthesis can bridge the gap between artificial and real-world industrial data, fostering more reliable and cost-efficient segmentation models. The code is publicly available at https://github.com/covisionlab/diffusion_labeling.