Shape and Substance: Dual-Layer Side-Channel Attacks on Local Vision-Language Models

📅 2026-03-26
🤖 AI Summary
This work addresses a critical privacy vulnerability in locally deployed vision-language models (VLMs) that employ dynamic high-resolution preprocessing techniques such as AnyRes. The authors demonstrate that such mechanisms inadvertently introduce algorithmic side channels that expose both geometric and semantic attributes of input images. They propose a dual-layer side-channel attack framework, combining OS-level execution-timing monitoring with last-level cache (LLC) contention analysis, to simultaneously recover geometric fingerprints and semantic-density information from the dynamic preprocessing pipeline. Experiments on LLaVA-NeXT and Qwen2-VL show that the approach can accurately infer sensitive attributes of user-provided visual inputs. The study further provides a systematic evaluation of the performance overhead of candidate countermeasures and concludes with practical security design recommendations for edge AI deployments.

📝 Abstract
On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic High-Resolution preprocessing (e.g., AnyRes) introduces an inherent algorithmic side channel. Unlike static models, dynamic preprocessing decomposes images into a variable number of patches based on their aspect ratio, creating workload-dependent inputs. We demonstrate a dual-layer attack framework against local VLMs. In Tier 1, an unprivileged attacker can exploit significant execution-time variations, observed through standard OS metrics, to reliably fingerprint the input's geometry. In Tier 2, by profiling Last-Level Cache (LLC) contention, the attacker can resolve semantic ambiguity within identical geometries, distinguishing between visually dense (e.g., medical X-rays) and sparse (e.g., text documents) content. By evaluating state-of-the-art models such as LLaVA-NeXT and Qwen2-VL, we show that combining these signals enables reliable inference of privacy-sensitive contexts. Finally, we analyze the security engineering trade-offs of mitigating this vulnerability, reveal the substantial performance overhead of constant-work padding, and propose practical design recommendations for secure Edge AI deployments.
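The leakage mechanism the abstract describes can be made concrete: in AnyRes-style preprocessing, the tile grid (and hence the number of vision-encoder patches) is selected from the image's aspect ratio, so the model's workload directly encodes a geometric property of a private input. Below is a minimal sketch of such grid selection; the function names, grid set, and tile size are illustrative assumptions modeled loosely on AnyRes-style pipelines, not the paper's or any model's actual code.

```python
# Illustrative AnyRes-style grid selection (hypothetical names/values).
# The point: patch count varies with aspect ratio, so execution time
# becomes a side channel on the input image's geometry.

GRIDS = ((1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (3, 1))  # (cols, rows)
TILE = 336  # assumed tile edge in pixels

def best_grid(width, height, grids=GRIDS, tile=TILE):
    """Pick the tile grid whose shape wastes the least area for this image."""
    def wasted_area(grid):
        cols, rows = grid
        # Scale the image to fit inside the grid, then measure unused area.
        scale = min(cols * tile / width, rows * tile / height)
        used = (width * scale) * (height * scale)
        return cols * rows * tile * tile - used
    return min(grids, key=wasted_area)

def patch_count(width, height):
    """Number of tiles the vision encoder must process (+1 global view)."""
    cols, rows = best_grid(width, height)
    return cols * rows + 1

# Different aspect ratios yield different patch counts, i.e. different work:
for w, h in [(336, 336), (672, 336), (336, 1008)]:
    print((w, h), "->", patch_count(w, h), "patches")
```

Because each extra tile adds a roughly fixed amount of encoder compute, an observer who can only time the preprocessing-plus-encoding step already learns the grid, and therefore the approximate aspect ratio, of the input; this is the Tier 1 geometric fingerprint the paper exploits before the LLC analysis of Tier 2.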
Problem

Research questions and friction points this paper is trying to address.

side-channel attack
vision-language models
dynamic preprocessing
privacy leakage
edge AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

side-channel attack
vision-language models
dynamic preprocessing
cache contention
edge AI security