🤖 AI Summary
In AI-powered computer vision applications for manufacturing, there is an inherent tension between worker privacy protection and production utility: conventional global blurring degrades task performance, while insufficient anonymization risks leakage of personally identifiable information (PII).
Method: We propose a task-driven, learnable visual transformation framework that enables on-demand masking of sensitive regions rather than global obfuscation. Our approach employs a lightweight deep feature disentanglement network, integrated with multimodal privacy metrics, edge-cloud collaborative deployment, and human-centered feedback evaluation.
Contribution/Results: We conduct the first cross-scenario, quantitative privacy–utility trade-off analysis and empirical validation across three real-world industrial settings: woodworking surveillance, AGV human-robot collaborative navigation, and multi-view ergonomic risk assessment. Experiments show ≥92% task-accuracy retention and an 87% reduction in PII leakage risk. The system achieves industrial-grade plug-and-play deployment and yields transferable guidelines for Responsible AI implementation.
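The core idea of task-driven, on-demand masking can be illustrated with a minimal sketch: a learnable soft mask obscures only sensitive regions, and the mask is scored by a joint objective that balances downstream task utility against residual PII leakage. All names here (`apply_mask`, `task_loss`, `privacy_leakage`, the weight `lam`) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def apply_mask(image, mask):
    """Obscure pixels where mask ~ 1; keep task-relevant pixels where mask ~ 0.
    A zero image stands in for the learned obfuscation (blur, synthesis, etc.)."""
    obfuscated = np.zeros_like(image)  # placeholder for the learned transform
    return (1.0 - mask) * image + mask * obfuscated

def task_loss(transformed, target):
    """Proxy for downstream task error on the transformed frame (lower = more utility)."""
    return float(np.mean((transformed - target) ** 2))

def privacy_leakage(transformed, sensitive_region):
    """Proxy for residual PII signal inside the flagged sensitive region."""
    return float(np.mean(np.abs(transformed[sensitive_region])))

def joint_objective(image, mask, target, sensitive_region, lam=1.0):
    """Privacy-utility trade-off: utility term plus lam-weighted leakage penalty.
    In the actual framework, the mask would be produced by a learned network."""
    t = apply_mask(image, mask)
    return task_loss(t, target) + lam * privacy_leakage(t, sensitive_region)

# Toy frame: top half flagged as sensitive (e.g., a worker's face region).
img = np.ones((4, 4))
sensitive = np.zeros((4, 4), dtype=bool)
sensitive[:2, :] = True

# Masking exactly the sensitive region drives its leakage proxy to zero,
# while leaving the task-relevant bottom half untouched.
masked = apply_mask(img, sensitive.astype(float))
```

The point of the sketch is the trade-off structure: global blurring would raise `task_loss` everywhere, whereas a region-targeted mask zeroes the leakage term while leaving task-relevant pixels, and hence utility, largely intact.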
📝 Abstract
The adoption of AI-powered computer vision in industry is often constrained by the need to balance operational utility with worker privacy. Building on our previously proposed privacy-preserving framework, this paper presents its first comprehensive validation on real-world data collected directly by industrial partners in active production environments. We evaluate the framework across three representative use cases: woodworking production monitoring, human-aware AGV navigation, and multi-camera ergonomic risk assessment. The approach employs learned visual transformations that obscure sensitive or task-irrelevant information while retaining features essential for task performance. Through both quantitative evaluation of the privacy–utility trade-off and qualitative feedback from industrial partners, we assess the framework's effectiveness, deployment feasibility, and trust implications. Results demonstrate that task-specific obfuscation enables effective monitoring with reduced privacy risks, establishing the framework's readiness for real-world adoption and providing cross-domain recommendations for responsible, human-centric AI deployment in industry.