🤖 AI Summary
This work investigates the vulnerability of linear sketches—such as Johnson–Lindenstrauss transforms and AMS sketches—to adversarial attacks in a black-box setting, specifically for $\ell_2$-norm estimation. An attacker, given only query access to approximate norm estimates computed from low-dimensional sketches $Av$, can either distort norm estimates or construct adversarial inputs that break optimal estimators. We propose the first universal, non-adaptive attack that is independent of both the sketching matrix $A$ and the estimator; it achieves query complexity $\widetilde{O}(k^2)$, matching the tight $\widetilde{\Omega}(k^2)$ lower bound. This quadratic complexity reveals a fundamental structural parallel between compressed representations and adversarial examples in image classification. Crucially, our result provides the first theoretical proof that *any* fixed linear sketching system is inherently insecure under adversarial queries. The findings deliver a critical security warning for high-dimensional data compression and privacy-preserving systems relying on linear dimensionality reduction.
📝 Abstract
Dimensionality reduction via linear sketching is a powerful and widely used technique, but it is known to be vulnerable to adversarial inputs. We study the black-box adversarial setting, where a fixed, hidden sketching matrix $A \in \mathbb{R}^{k \times n}$ maps high-dimensional vectors $v \in \mathbb{R}^n$ to lower-dimensional sketches $Av \in \mathbb{R}^k$, and an adversary can query the system to obtain approximate $\ell_2$-norm estimates that are computed from the sketch.
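To make the setting concrete, here is a minimal sketch of $\ell_2$-norm estimation from a linear sketch, assuming a dense Gaussian (Johnson–Lindenstrauss-style) sketching matrix; the dimensions, scaling, and `estimate_l2` helper are illustrative choices, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 400  # ambient dimension vs. sketch dimension (illustrative)

# Dense Gaussian sketch with entries N(0, 1/k), so E[||Av||^2] = ||v||^2.
A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))

def estimate_l2(v: np.ndarray) -> float:
    """Estimate ||v||_2 using only the k-dimensional sketch Av."""
    return float(np.linalg.norm(A @ v))

v = rng.normal(size=n)
true_norm = float(np.linalg.norm(v))
est = estimate_l2(v)
print(abs(est - true_norm) / true_norm)  # small relative error w.h.p.
```

On non-adversarial inputs such an estimator concentrates around the true norm; the results above concern how an adversary with query access can drive it to fail.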
We present a universal, non-adaptive attack that, using $\widetilde{O}(k^2)$ queries, either causes a failure in norm estimation or constructs an adversarial input on which the optimal estimator for the query distribution (used by the attack) fails. The attack is completely agnostic to the sketching matrix and to the estimator: it applies to any linear sketch and any query responder, including those that are randomized, adaptive, or tailored to the query distribution.
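The failure mode being exploited can be illustrated with a toy example. The demo below assumes a *stronger* access model than the paper's (the oracle reveals the sketch $Av$ itself rather than only a norm estimate, and the attack is white-box once $A$'s columns are read off); it is not the paper's attack, but it shows why every sketch-based estimator has blind spots: any $v$ in the kernel of $A$ has sketch zero, so the estimator must report a norm near zero no matter how large $\|v\|_2$ is.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 64, 8
A = rng.normal(size=(k, n))  # hidden sketching matrix (toy stand-in)

def oracle(v: np.ndarray) -> np.ndarray:
    # Toy oracle that reveals the sketch itself -- a stronger access
    # model than the estimate-only queries studied in the paper.
    return A @ v

# Recover A column by column with n standard-basis queries, then pick a
# unit vector in its kernel: its sketch is exactly 0, so any estimator
# reading only the sketch must answer ~0, while the true norm is 1.
cols = np.stack([oracle(np.eye(n)[:, i]) for i in range(n)], axis=1)
_, _, Vt = np.linalg.svd(cols)
v_adv = Vt[-1]  # right singular vector for a zero singular value

print(np.linalg.norm(oracle(v_adv)))  # ~0: the sketch is blind to v_adv
print(np.linalg.norm(v_adv))          # 1: the true norm is not small
```

The paper's contribution is achieving this kind of break in the much harder black-box, estimate-only setting, non-adaptively and with only $\widetilde{O}(k^2)$ queries rather than $n$ sketch-revealing ones.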
The $\widetilde{O}(k^2)$ query complexity of our attack is tight: it matches the known $\widetilde{\Omega}(k^2)$ bounds achieved by specialized estimators for Johnson–Lindenstrauss transforms and AMS sketches, which remain accurate under that many adaptive queries. Beyond sketching, our results uncover structural parallels to adversarial attacks in image classification, highlighting fundamental vulnerabilities of compressed representations.