🤖 AI Summary
To address the dual challenges of limited computational capacity on edge devices and privacy leakage in cloud-based vision processing, this paper proposes a privacy-by-design hierarchical distributed Vision Transformer (ViT) offloading framework. A trusted edge node performs semantic image partitioning and distributes non-reconstructible fragments across multiple cloud nodes, so that no single cloud node can recover the original image, while feature fusion and final aggregation remain entirely local. The paper introduces the first semantic-level, non-reconstructible distributed computing paradigm for ViTs and pioneers its application to the Segment Anything Model (SAM). The approach retains 98.3% of baseline segmentation accuracy while reducing the success rate of image reconstruction attacks to below 0.7%. Combined with differential-privacy-enhanced sharding and edge-cloud collaborative scheduling, end-to-end latency stays at or below 320 ms, and the system scales dynamically to 16+ heterogeneous cloud nodes.
📝 Abstract
Visual intelligence tools are now ubiquitous, but their computational demands exceed the capabilities of resource-constrained mobile and wearable devices. Offloading visual data to the cloud is a common remedy, yet it introduces significant privacy vulnerabilities during both transmission and server-side computation. We propose a distributed, hierarchical offloading framework for Vision Transformers (ViTs) that addresses these privacy challenges by design. A local trusted device, such as a mobile phone or an NVIDIA Jetson, acts as the edge orchestrator: it partitions the user's visual data into smaller fragments and distributes them across multiple independent cloud servers. Because no single external server ever holds the complete image, comprehensive data reconstruction is prevented by construction. The final merging and aggregation of features occurs exclusively on the user's trusted edge device. We apply the framework to the Segment Anything Model (SAM) as a practical case study and show that it substantially enhances content privacy over traditional cloud-based approaches. Evaluations show that the framework maintains near-baseline segmentation performance while substantially reducing the risk of content reconstruction and user-data exposure. The result is a scalable, privacy-preserving solution for vision tasks across the edge-cloud continuum.
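The partition-and-fuse idea described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's actual scheme: it assumes 16×16 ViT patches and uses a plain random permutation of patch indices as the sharding strategy (the paper's semantic partitioning and differential-privacy enhancements are not modeled). The function names `partition_image` and `fuse_features` are hypothetical.

```python
import numpy as np

PATCH = 16  # assumed ViT patch size

def partition_image(img, num_nodes, rng):
    """Split an image's patch grid into disjoint shards, one per cloud node.

    Patches are assigned by random permutation, so no node receives a
    spatially contiguous region it could reconstruct on its own. The
    index map (which patch went where) never leaves the edge device.
    """
    H, W, C = img.shape
    patches = img.reshape(H // PATCH, PATCH, W // PATCH, PATCH, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, PATCH, PATCH, C)
    idx = rng.permutation(len(patches))
    shard_idx = np.array_split(idx, num_nodes)
    # Each node gets only its patch pixels; positions stay on the edge.
    return [(s, patches[s]) for s in shard_idx]

def fuse_features(shard_feats, shard_idx, num_patches, dim):
    """Reassemble per-patch features returned by the cloud nodes,
    using the index map kept only on the trusted edge device."""
    fused = np.zeros((num_patches, dim))
    for idx, feats in zip(shard_idx, shard_feats):
        fused[idx] = feats
    return fused
```

In a real deployment each shard would be sent to a different cloud node for ViT feature extraction; here the key property is simply that the shards are disjoint and their placement is known only to the edge orchestrator.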