🤖 AI Summary
This work uncovers an intrinsic trade-off between improved model performance and heightened membership privacy leakage in contrastive learning encoders. Addressing key limitations of existing membership inference attacks (MIAs)—namely, their reliance on labels or gradients and poor robustness—we propose the Embedding Lp-Norm Likelihood Attack (LpLA), a label- and gradient-free MIA. LpLA is the first to empirically establish a positive correlation between encoder architectural complexity and privacy leakage intensity, and it models membership likelihood based on the statistical properties of p-norms of learned embedding vectors. Extensive experiments across multiple datasets and encoder architectures demonstrate that LpLA significantly outperforms state-of-the-art MIAs under low-query-budget and weak-prior conditions. Our findings introduce a novel dimension—embedding norm statistics—for encoder privacy risk assessment and provide empirical grounding for understanding privacy-performance trade-offs in self-supervised representation learning.
📝 Abstract
With the rapid advancement of deep learning technology, pre-trained encoder models have demonstrated exceptional feature extraction capabilities, playing a pivotal role in the research and application of deep learning. However, their widespread use has raised significant concerns about the risk of training data privacy leakage. This paper systematically investigates the privacy threats posed by membership inference attacks (MIAs) targeting encoder models, focusing on contrastive learning frameworks. Through experimental analysis, we reveal the significant impact of model architecture complexity on membership privacy leakage: As more advanced encoder frameworks improve feature-extraction performance, they simultaneously exacerbate privacy-leakage risks. Furthermore, this paper proposes a novel membership inference attack method based on the p-norm of feature vectors, termed the Embedding Lp-Norm Likelihood Attack (LpLA). This method infers membership status, by leveraging the statistical distribution characteristics of the p-norm of feature vectors. Experimental results across multiple datasets and model architectures demonstrate that LpLA outperforms existing methods in attack performance and robustness, particularly under limited attack knowledge and query volumes. This study not only uncovers the potential risks of privacy leakage in contrastive learning frameworks, but also provides a practical basis for privacy protection research in encoder models. We hope that this work will draw greater attention to the privacy risks associated with self-supervised learning models and shed light on the importance of a balance between model utility and training data privacy. Our code is publicly available at: https://github.com/SeroneySun/LpLA_code.