A Distributed Framework for Privacy-Enhanced Vision Transformers on the Edge

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of limited computational capacity on edge devices and privacy leakage in cloud-based vision processing, this paper proposes a privacy-by-design hierarchical distributed offloading framework for Vision Transformers (ViTs). A trusted edge node performs semantic image partitioning and distributes non-reconstructible fragments across multiple cloud nodes, so that no single cloud node can recover the original image, while feature fusion and final aggregation remain entirely local. We introduce the first semantic-level non-reconstructible distributed computing paradigm for ViTs and pioneer its application to the Segment Anything Model (SAM). The approach retains 98.3% of baseline segmentation accuracy while reducing the success rate of image reconstruction attacks to below 0.7%. Combined with differential privacy-enhanced sharding and edge-cloud collaborative scheduling, end-to-end latency remains at or below 320 ms, and the system scales dynamically to 16+ heterogeneous cloud nodes.
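The "differential privacy-enhanced sharding" mentioned in the summary is not detailed on this page. A minimal sketch of one plausible realization, assuming Laplace noise calibrated to sensitivity/epsilon is added to each feature shard on the trusted edge before transmission (the function name and parameters are illustrative assumptions, not the paper's actual mechanism):

```python
import numpy as np

def dp_noise_shard(shard: np.ndarray, sensitivity: float,
                   epsilon: float, rng: np.random.Generator) -> np.ndarray:
    """Add zero-mean Laplace noise with scale sensitivity/epsilon to one
    feature shard on the trusted edge, before sending it to a cloud node."""
    scale = sensitivity / epsilon
    return shard + rng.laplace(0.0, scale, size=shard.shape)
```

A larger epsilon (weaker privacy) yields smaller noise and better downstream accuracy; the paper's 98.3% accuracy retention suggests its noise budget is tuned to the segmentation task.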

📝 Abstract
Visual intelligence tools are now ubiquitous, but their computational requirements exceed the capabilities of resource-constrained mobile and wearable devices. While offloading visual data to the cloud is a common solution, it introduces significant privacy vulnerabilities during transmission and server-side computation. We propose a novel distributed, hierarchical offloading framework for Vision Transformers (ViTs) that addresses these privacy challenges by design. A local trusted edge device, such as a mobile phone or an Nvidia Jetson, acts as the edge orchestrator: it partitions the user's visual data into smaller portions and distributes them across multiple independent cloud servers. By design, no single external server possesses the complete image, preventing comprehensive data reconstruction. The final data merging and aggregation occur exclusively on the user's trusted edge device. We apply our framework to the Segment Anything Model (SAM) as a practical case study and show that it substantially enhances content privacy over traditional cloud-based approaches. Evaluations show our framework maintains near-baseline segmentation performance while substantially reducing the risk of content reconstruction and user data exposure, providing a scalable, privacy-preserving solution for vision tasks in the edge-cloud continuum.
Problem

Research questions and friction points this paper is trying to address.

Enhancing privacy for Vision Transformers on edge devices
Preventing complete image reconstruction on cloud servers
Reducing data exposure while maintaining segmentation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed hierarchical offloading for Vision Transformers
Partitions visual data across multiple independent cloud servers
Final aggregation only on trusted local edge device
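The partition-then-aggregate idea in the bullets above can be sketched as follows. The round-robin patch assignment (so no cloud node holds spatially adjacent patches) and the fusion order are assumptions for illustration, not the paper's actual partitioning scheme:

```python
import numpy as np

def shard_patches(image: np.ndarray, patch: int, n_nodes: int):
    """Split an image into ViT-style square patches and deal them
    round-robin across n_nodes cloud workers, so that no single worker
    receives a contiguous region of the image."""
    h, w, _ = image.shape
    patches = [
        image[i:i + patch, j:j + patch]
        for i in range(0, h, patch)
        for j in range(0, w, patch)
    ]
    return [patches[k::n_nodes] for k in range(n_nodes)]

def fuse(per_node_results):
    """Edge-side fusion: restore the original patch order, inverting
    the round-robin deal. Runs only on the trusted edge device."""
    n_nodes = len(per_node_results)
    total = sum(len(r) for r in per_node_results)
    out = [None] * total
    for k, results in enumerate(per_node_results):
        for idx, r in enumerate(results):
            out[k + idx * n_nodes] = r
    return out
```

In the real system each cloud node would run ViT inference on its shard and return features rather than raw patches; `fuse` stands in for the local feature-fusion step.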
👥 Authors
Zihao Ding (Rutgers University, Piscataway, NJ, USA)
Mufeng Zhu (Ph.D., Rutgers University; interests: immersive video streaming, 3D Gaussian Splatting, NeRF)
Zhongze Tang (Rutgers University, Piscataway, NJ, USA)
Sheng Wei (Rutgers University-New Brunswick; interests: hardware security, multimedia systems, multimedia security)
Yao Liu (Rutgers University, Piscataway, NJ, USA)