🤖 AI Summary
In laptop refurbishment, the diversity of adhesive labels (stickers) introduces high uncertainty and a risk of device damage during automated detection and removal. Method: this study systematically evaluates six sticker detection models built on open-source object detectors (YOLO- and DETR-family) across heterogeneous image sources, namely real-world images and synthetic images generated with DALL·E-3 and Stable Diffusion 3, and proposes novel robustness metrics that integrate Monte Carlo Dropout-based uncertainty quantification with Dense Adversary Generation adversarial perturbations. Contribution/Results: it pioneers the combination of generative-model-synthesized data with industrial-grade detection validation. Experiments reveal substantial inter-model performance disparities, yielding reusable model selection guidelines and a deployment framework. The approach helps reduce erroneous label removal, improving safety in refurbishment and supporting the reduction of electronic waste.
📝 Abstract
Refurbishing laptops extends their service life and reduces electronic waste, contributing to a more sustainable future. To this end, the Danish Technological Institute (DTI) conducts research and development on several applications, including laptop refurbishing. Refurbishing involves several steps, including cleaning, which requires identifying and removing stickers from laptop surfaces. DTI trained six sticker detection models (SDMs) based on open-source object detection models to locate such stickers precisely so that they can be removed automatically. However, given the diversity of stickers (e.g., in shape, color, and location), sticker identification is highly uncertain, which calls for explicit quantification of the uncertainty associated with each detection. Such uncertainty quantification helps reduce the risk of removal errors that could, for example, damage laptop surfaces. For uncertainty quantification, we adopted the Monte Carlo Dropout method to evaluate the six SDMs from DTI on three datasets: the original image dataset from DTI and two datasets generated with vision language models, i.e., DALL-E-3 and Stable Diffusion-3. In addition, we present novel robustness metrics, covering both detection accuracy and uncertainty, to assess the robustness of the SDMs on adversarial datasets generated from the three datasets using a dense adversary method. Our evaluation shows that the SDMs perform differently across the different metrics. Based on these results, we provide SDM selection guidelines and lessons learned from various perspectives.
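Monte Carlo Dropout, as used above, keeps dropout active at inference time and runs many stochastic forward passes; the spread of the resulting detection confidences then serves as an uncertainty estimate. The following is a minimal pure-Python sketch of that idea, using a toy stand-in for a detector's confidence output; the function names, the 100-unit toy layer, and the dropout rate are illustrative assumptions, not the paper's actual implementation:

```python
import random
import statistics

def predict_with_dropout(score, n_units=100, p_drop=0.5, rng=None):
    # One stochastic forward pass: randomly zero a fraction of (toy) unit
    # activations with probability p_drop, then rescale by the keep rate,
    # mimicking a dropout layer left active at test time.
    rng = rng or random
    kept = [score for _ in range(n_units) if rng.random() > p_drop]
    return sum(kept) / (n_units * (1 - p_drop))

def mc_dropout_uncertainty(score, n_passes=200, seed=0):
    # Repeat the stochastic pass many times; the mean approximates the
    # predictive confidence and the standard deviation quantifies its
    # uncertainty, per the Monte Carlo Dropout recipe.
    rng = random.Random(seed)
    samples = [predict_with_dropout(score, rng=rng) for _ in range(n_passes)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_conf, sigma = mc_dropout_uncertainty(0.9)
```

In an actual SDM, each pass would be a full detector forward pass with its dropout layers enabled, and the mean and spread would be computed per detected bounding box (for both confidence and box coordinates) rather than for a single scalar score.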