Privacy-Preserving Computer Vision for Industry: Three Case Studies in Human-Centric Manufacturing

📅 2025-12-10
🤖 AI Summary
In AI-powered computer vision applications for manufacturing, an inherent tension exists between worker privacy protection and production utility: conventional global blurring degrades task performance, while insufficient anonymization risks PII leakage. Method: We propose a task-driven, learnable visual transformation framework that enables on-demand masking of sensitive regions rather than global obfuscation. Our approach employs a lightweight deep feature disentanglement network, integrated with multimodal privacy metrics, edge-cloud collaborative deployment, and human-centered feedback evaluation. Contribution/Results: We conduct the first cross-scenario, quantitative privacy–utility trade-off analysis and empirical validation across three real-world industrial settings: woodworking surveillance, AGV human-robot collaborative navigation, and multi-view ergonomics risk assessment. Experiments show ≥92% task accuracy retention and an 87% reduction in PII leakage risk. The system achieves industrial-grade plug-and-play deployment and yields a transferable Responsible AI implementation guideline.
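The core contrast the summary draws, selective masking of sensitive regions versus global obfuscation, can be illustrated with a minimal sketch. This is not the authors' implementation: the box blur and the precomputed boolean sensitive-region mask (e.g. from a face detector) are assumptions for illustration. Only pixels flagged as privacy-sensitive are degraded; task-relevant pixels pass through unchanged.

```python
import numpy as np

def selective_obfuscate(image, mask, kernel=3):
    """Blur only the pixels marked True in `mask`.

    image: (H, W) grayscale array.
    mask:  (H, W) boolean array; True marks privacy-sensitive regions.
    Pixels outside the mask keep their original values, unlike a
    global blur that degrades the whole frame.
    """
    H, W = image.shape
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # mean over the kernel x kernel window centred on (i, j)
            blurred[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    out = image.astype(float)
    out[mask] = blurred[mask]  # obfuscate only the sensitive pixels
    return out

# Toy example: one bright pixel inside a 3x3 "sensitive" region.
img = np.zeros((6, 6))
img[1, 1] = 9.0
mask = np.zeros((6, 6), dtype=bool)
mask[:3, :3] = True

anon = selective_obfuscate(img, mask, kernel=3)
# anon[1, 1] is now 1.0 (the 9 averaged over a 3x3 window);
# every pixel outside the mask is untouched.
```

A learned version of this idea would replace the fixed blur with a trainable transformation optimized jointly against a task loss and a privacy (re-identification) loss, which is what the paper's disentanglement network targets.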

📝 Abstract
The adoption of AI-powered computer vision in industry is often constrained by the need to balance operational utility with worker privacy. Building on our previously proposed privacy-preserving framework, this paper presents its first comprehensive validation on real-world data collected directly by industrial partners in active production environments. We evaluate the framework across three representative use cases: woodworking production monitoring, human-aware AGV navigation, and multi-camera ergonomic risk assessment. The approach employs learned visual transformations that obscure sensitive or task-irrelevant information while retaining features essential for task performance. Through both quantitative evaluation of the privacy-utility trade-off and qualitative feedback from industrial partners, we assess the framework's effectiveness, deployment feasibility, and trust implications. Results demonstrate that task-specific obfuscation enables effective monitoring with reduced privacy risks, establishing the framework's readiness for real-world adoption and providing cross-domain recommendations for responsible, human-centric AI deployment in industry.
Problem


Balancing operational utility with worker privacy in AI-powered computer vision
Validating a privacy-preserving framework on real-world industrial data
Assessing effectiveness and feasibility through three manufacturing use cases
Innovation


Learned visual transformations obscure sensitive information
Task-specific obfuscation balances privacy and utility
Framework validated in three real-world industrial use cases
Sander De Coninck
IDLab, Department of Information Technology at Ghent University – imec, Technologiepark 126, B-9052 Ghent, Belgium

Emilio Gamba
Flanders Make, corelab ProductionS, Oude Diestersebaan 133, 3920 Lommel, Belgium

Bart Van Doninck
Flanders Make, corelab ProductionS, Oude Diestersebaan 133, 3920 Lommel, Belgium

Abdellatif Bey-Temsamani
Flanders Make, corelab ProductionS, Oude Diestersebaan 133, 3920 Lommel, Belgium

Sam Leroux
Assistant professor, Ghent University - imec
Resource-efficient deep learning, machine learning on the edge, distributed machine learning, TinyML

Pieter Simoens
imec - Ghent University
Cloud robotics, Internet of Robotic Things, deep learning, edge computing, swarm/collective intelligence