🤖 AI Summary
Mobile sensor data, when accessed in real time by third-party applications, risks exposing sensitive user attributes (e.g., gender, identity). Existing privacy-preserving methods either compromise real-time performance by requiring the full sequence to be acquired first, or distort spatiotemporal semantics, degrading downstream tasks such as activity recognition. This paper proposes the first online predictive adversarial privacy-protection framework: lightweight adversarial perturbations are dynamically predicted and generated *at the moment of data acquisition*, leveraging historical signals without waiting for the sequence to complete. By integrating time-series forecasting with generative adversarial networks, and employing gradient masking and adversarial training, the framework achieves low-distortion, low-latency perturbation generation. Experiments show that the method reduces the success rate of inference attacks on sensitive attributes to 40.11%–44.65% and raises equal error rates to 41.65%–46.22%, significantly outperforming baselines while fully preserving downstream activity-recognition accuracy.
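The core idea of predicting a perturbation from historical signals and applying it the instant a new sample arrives can be illustrated with a toy sketch. This is not the paper's architecture: the moving-average forecaster and the bounded random perturbation below are stand-ins for the learned time-series forecaster and the adversarial generator, and the class name `OnlinePerturber` and bound `epsilon` are my own illustrative choices.

```python
import numpy as np

class OnlinePerturber:
    """Toy online predictive perturbation (illustrative only).

    Keeps a sliding window of recent sensor samples, forecasts the next
    sample with a simple moving average (a stand-in for a learned
    forecaster), and pre-computes a bounded perturbation so it can be
    added the instant the real sample is acquired -- no waiting for the
    full sequence.
    """

    def __init__(self, window=16, epsilon=0.05, seed=0):
        self.window = window
        self.epsilon = epsilon          # L-infinity bound on the perturbation
        self.history = []
        self.rng = np.random.default_rng(seed)

    def _forecast(self):
        # Moving-average forecast over the most recent window.
        return np.mean(self.history[-self.window:], axis=0)

    def protect(self, sample):
        """Return the perturbed sample immediately upon acquisition."""
        sample = np.asarray(sample, dtype=float)
        if len(self.history) >= self.window:
            pred = self._forecast()
            # Stand-in for an adversarial generator: a bounded random
            # perturbation with the forecast's shape. A real system would
            # derive this from gradients of a privacy-inference model.
            delta = np.clip(self.rng.normal(size=pred.shape) * self.epsilon,
                            -self.epsilon, self.epsilon)
        else:
            delta = np.zeros_like(sample)   # warm-up: history still filling
        self.history.append(sample)
        return sample + delta
```

Because the perturbation is computed from history before the sample arrives, the per-sample cost at acquisition time is a single addition, which is what makes the online, low-latency setting feasible.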
📝 Abstract
Mobile motion sensors such as accelerometers and gyroscopes are now ubiquitously accessible to third-party apps via standard APIs. While this openness enables rich functionality such as activity recognition and step counting, it also permits unregulated inference of sensitive user traits, such as gender, age, and even identity, without user consent. Existing privacy-preserving techniques, such as GAN-based obfuscation or differential privacy, typically require access to the full input sequence, introducing latency that is incompatible with real-time scenarios. Worse, they tend to distort temporal and semantic patterns, degrading the utility of the data for benign tasks like activity recognition. To address these limitations, we propose the Predictive Adversarial Transformation Network (PATN), a real-time privacy-preserving framework that leverages historical signals to generate adversarial perturbations proactively. The perturbations are applied immediately upon data acquisition, enabling continuous protection without disrupting application functionality. Experiments on two datasets demonstrate that PATN substantially degrades the performance of privacy-inference models, achieving Attack Success Rates (ASR) of 40.11% and 44.65% (reducing inference accuracy to near-random) and increasing the Equal Error Rate (EER) from 8.30% and 7.56% to 41.65% and 46.22%. On ASR, PATN outperforms baseline methods by 16.16% and 31.96%, respectively.
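The Equal Error Rate reported above is the standard biometric verification metric: the operating point where the false accept rate (impostor samples accepted) equals the false reject rate (genuine samples rejected). A minimal sketch of how it is computed from verification scores, assuming higher scores mean "same user" (the function name and threshold sweep are illustrative, not the paper's evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the Equal Error Rate of a verification system.

    Sweeps a decision threshold over all observed scores and returns the
    error rate at the threshold where the false accept rate (FAR) and
    false reject rate (FRR) are closest. Higher scores mean "same user".
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # FAR: fraction of impostor scores at or above the threshold.
    far = np.array([(impostor >= t).mean() for t in thresholds])
    # FRR: fraction of genuine scores below the threshold.
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```

An EER near 8% (the unprotected baseline) means identification is easy; pushing it above 40%, as PATN does, means the attacker's identification model is close to guessing, since an EER of 50% corresponds to chance.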