🤖 AI Summary
This work addresses a critical privacy vulnerability in locally deployed vision-language models (VLMs) that employ dynamic high-resolution preprocessing such as AnyRes. The authors show that these mechanisms inadvertently introduce algorithmic side channels that expose both geometric and semantic attributes of input images. They propose the first two-tier side-channel attack framework, combining OS-level execution-time monitoring with last-level cache (LLC) contention analysis, to simultaneously recover geometric fingerprints and the semantic-density information leaked by the dynamic preprocessing pipeline. Experiments on LLaVA-NeXT and Qwen2-VL show that the approach can accurately infer sensitive attributes of user-provided images. The study further provides a systematic evaluation of the performance overhead of existing countermeasures and concludes with practical security design recommendations for edge AI deployments.
📝 Abstract
On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic High-Resolution preprocessing (e.g., AnyRes) introduces an inherent algorithmic side channel. Unlike static models, dynamic preprocessing decomposes an image into a variable number of patches based on its aspect ratio, making the preprocessing workload input-dependent. We demonstrate a dual-layer attack framework against local VLMs. In Tier 1, an unprivileged attacker exploits significant execution-time variations, observable through standard OS metrics, to reliably fingerprint the input's geometry. In Tier 2, by profiling Last-Level Cache (LLC) contention, the attacker resolves semantic ambiguity within identical geometries, distinguishing visually dense content (e.g., medical X-rays) from sparse content (e.g., text documents). Evaluating state-of-the-art models such as LLaVA-NeXT and Qwen2-VL, we show that combining these signals enables reliable inference of privacy-sensitive contexts. Finally, we analyze the security engineering trade-offs of mitigating this vulnerability, showing that constant-work padding incurs substantial performance overhead, and propose practical design recommendations for secure Edge AI deployments.
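To see why dynamic preprocessing makes the workload input-dependent, consider a simplified sketch of AnyRes-style grid selection. This is an illustrative approximation, not the exact LLaVA-NeXT or Qwen2-VL implementation: the function name, candidate grids, and tile size (336 px, as in LLaVA-NeXT) are assumptions for the example. The key point is that two images with the same pixel budget but different aspect ratios are routed to different tile grids, hence different patch counts and different execution times.

```python
# Illustrative AnyRes-style grid selection (simplified, not the actual
# model code). The chosen grid, and thus the number of vision patches
# processed, depends on the input image's shape: a workload side channel.

def select_best_grid(img_w, img_h, candidates, tile=336):
    """Pick the candidate (cols, rows) grid that preserves the most image
    content (effective resolution) while wasting the fewest tile pixels."""
    best, best_eff, best_waste = None, -1, float("inf")
    for cols, rows in candidates:
        cand_w, cand_h = cols * tile, rows * tile
        # Scale the image to fit inside the candidate canvas.
        scale = min(cand_w / img_w, cand_h / img_h)
        scaled_w, scaled_h = int(img_w * scale), int(img_h * scale)
        # Effective resolution: image pixels actually represented.
        eff = min(scaled_w * scaled_h, img_w * img_h)
        waste = cand_w * cand_h - eff
        if eff > best_eff or (eff == best_eff and waste < best_waste):
            best, best_eff, best_waste = (cols, rows), eff, waste
    return best

candidates = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (3, 1)]
# A square photo and a tall document select different grids, so the
# encoder runs over a different number of tiles for each.
print(select_best_grid(1000, 1000, candidates))  # square image
print(select_best_grid(600, 1600, candidates))   # tall document scan
```

An attacker who can estimate the tile count, e.g. from end-to-end latency observed via unprivileged OS metrics, thereby learns the input's rough geometry without ever seeing the image.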