Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation

📅 2023-09-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of stealing black-box image classification models under an extremely low query budget (≤100 API calls), without access to the original training data, model architecture, or parameters. To tackle the severe few-call constraint, we propose the first unified framework integrating diffusion-based surrogate data generation, uncertainty-driven active learning, and curriculum-style self-paced knowledge distillation. Specifically, we leverage diffusion models to synthesize highly discriminative surrogate samples; employ an uncertainty-guided active querying strategy to maximize labeling efficiency; and apply semi-supervised distillation in a curriculum manner to progressively enhance student model fidelity. Evaluated on three benchmark datasets, our method significantly outperforms four state-of-the-art baselines, achieving high-fidelity model replication within ≤100 queries. The source code is publicly available.
📝 Abstract
Diffusion models showcase strong capabilities in image synthesis, being used in many computer vision tasks with great success. To this end, we propose to explore a new use case, namely to copy black-box classification models without having access to the original training data, the architecture, and the weights of the model, i.e. the model is only exposed through an inference API. More specifically, we can only observe the (soft or hard) labels for some image samples passed as input to the model. Furthermore, we consider an additional constraint limiting the number of model calls, mostly focusing our research on few-call model stealing. In order to solve the model extraction task given the applied restrictions, we propose the following framework. As training data, we create a synthetic data set (called proxy data set) by leveraging the ability of diffusion models to generate realistic and diverse images. Given a maximum number of allowed API calls, we pass the respective number of samples through the black-box model to collect labels. Finally, we distill the knowledge of the black-box teacher (attacked model) into a student model (copy of the attacked model), harnessing both labeled and unlabeled data generated by the diffusion model. We employ a novel active self-paced learning framework to make the most of the proxy data during distillation. Our empirical results on three data sets confirm the superiority of our framework over four state-of-the-art methods in the few-call model extraction scenario. We release our code for free non-commercial use at https://github.com/vladhondru25/model-stealing.
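The query-selection step of the pipeline above can be sketched in a few lines: given proxy images generated by the diffusion model and a fixed API budget, spend the calls on the samples the current student is least certain about. The entropy criterion and the toy budget below are illustrative assumptions for a generic uncertainty-guided active strategy, not necessarily the paper's exact selection rule.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (student's prediction)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_queries(student_probs, budget):
    """Pick the `budget` proxy samples the student is least certain about.

    student_probs: one probability vector per proxy sample, taken from the
    current student model; higher entropy = more uncertain = more worth
    spending one of the limited API calls on.
    """
    ranked = sorted(range(len(student_probs)),
                    key=lambda i: entropy(student_probs[i]),
                    reverse=True)
    return ranked[:budget]

# Toy example: 4 diffusion-generated proxy samples, budget of 2 API calls.
probs = [
    [0.98, 0.01, 0.01],  # student already confident -> skip
    [0.34, 0.33, 0.33],  # near-uniform -> query the black box
    [0.90, 0.05, 0.05],  # confident -> skip
    [0.50, 0.30, 0.20],  # uncertain -> query the black box
]
print(select_queries(probs, budget=2))  # -> [1, 3]
```

The selected indices are the only samples sent to the black-box API; the rest of the proxy set remains unlabeled and is still used during distillation.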
Problem

Research questions and friction points this paper is trying to address.

Copy black-box models without original data or architecture access.
Limit model calls for efficient few-call model stealing.
Use diffusion models to generate synthetic training data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion models for synthetic image generation.
Implements active self-paced knowledge distillation.
Limits API calls for efficient model extraction.
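The self-paced distillation idea above can be illustrated with a minimal sketch: each proxy sample contributes to the student's distillation loss only once its current loss falls below a pace threshold, and the threshold grows over training so harder samples enter the curriculum later. The hard 0/1 weighting and the threshold schedule here are a generic self-paced scheme assumed for illustration, not the paper's exact pacing function.

```python
def self_paced_weights(losses, threshold):
    """Hard self-paced weighting: a sample is included (weight 1.0) only if
    its current per-sample distillation loss is below the pace threshold."""
    return [1.0 if loss < threshold else 0.0 for loss in losses]

# Toy curriculum: per-sample losses stay fixed while the threshold grows,
# so easy samples are learned first and hard ones are admitted later.
losses = [0.2, 0.9, 0.5, 1.4]
for threshold in (0.4, 1.0, 2.0):
    print(threshold, self_paced_weights(losses, threshold))
# 0.4 -> only the easiest sample is used
# 1.0 -> three samples are used
# 2.0 -> the full proxy set participates
```

In practice the weights would multiply per-sample losses (e.g. KL divergence between teacher and student outputs) inside each training round, and the losses themselves would shrink as the student improves.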