ArtistAuditor: Auditing Artist Style Pirate in Text-to-Image Generation Models

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Copyright auditing of unauthorized fine-tuning and style appropriation from artists in diffusion models (e.g., Stable Diffusion) remains challenging due to the lack of non-intrusive, post-deployment detection methods. Method: We propose a black-box, model-agnostic style misuse detection framework that models artistic style as multi-granularity feature distributions in the latent space. Our approach requires neither original training data nor model modification or internal access; instead, it leverages cross-model transferable auditing via a style representation extractor and a distribution-aware discriminator. Contribution/Results: Evaluated on six model-dataset pairs, our method achieves AUC ≥ 0.937 across all settings, demonstrating strong generalizability and module robustness. It has been deployed on an operational online platform for real-world copyright enforcement. To the best of our knowledge, this is the first work enabling universal, data-free, and model-access-free style appropriation auditing for diffusion models.

📝 Abstract
Text-to-image models based on diffusion processes, such as DALL-E, Stable Diffusion, and Midjourney, are capable of transforming texts into detailed images and have widespread applications in art and design. As such, amateur users can easily imitate professional-level paintings by collecting an artist's works and fine-tuning a model, raising concerns about copyright infringement of artworks. To tackle these issues, previous studies either add visually imperceptible perturbations to the artwork to change its underlying style (perturbation-based methods) or embed post-training detectable watermarks in the artwork (watermark-based methods). However, once the artwork or the model has been published online, i.e., modification of the original artwork or model retraining is no longer feasible, these strategies may not be viable. To this end, we propose a novel method for data-use auditing in text-to-image generation models. The general idea of ArtistAuditor is to identify whether a suspicious model has been fine-tuned using the artworks of a specific artist by analyzing style-related features. Concretely, ArtistAuditor employs a style extractor to obtain multi-granularity style representations and treats artworks as samplings of an artist's style. Then, ArtistAuditor queries a trained discriminator to obtain the auditing decisions. Experimental results on six combinations of models and datasets show that ArtistAuditor can achieve high AUC values (>0.937). By studying ArtistAuditor's transferability and core modules, we provide valuable insights into its practical implementation. Finally, we demonstrate the effectiveness of ArtistAuditor in real-world cases on the online platform Scenario. ArtistAuditor is open-sourced at https://github.com/Jozenn/ArtistAuditor.
Problem

Research questions and friction points this paper is trying to address.

Detects unauthorized artist style imitation in text-to-image models
Audits fine-tuned models using specific artists' artworks
Identifies style piracy without modifying original artworks or models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Style extractor analyzes multi-granularity representations
Queries a trained discriminator to produce auditing decisions
Achieves high AUC values (> 0.937)
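The pipeline sketched by these bullets (extract multi-granularity style features, treat artworks as samples of an artist's style distribution, then query a discriminator) can be illustrated with a toy example. Everything below is an assumption for illustration only: the real ArtistAuditor uses a learned style extractor and a trained discriminator, whereas this sketch stands in pre-computed feature vectors and a simple mean-distance score.

```python
import numpy as np

def audit_score(artist_features, suspect_features):
    """Toy distribution-level similarity: negative distance between the
    mean style vectors of the artist's artworks and the suspect model's
    generations (higher score => more likely fine-tuned on the artist)."""
    mu_artist = artist_features.mean(axis=0)
    mu_suspect = suspect_features.mean(axis=0)
    return -float(np.linalg.norm(mu_artist - mu_suspect))

def audit(artist_features, suspect_features, threshold=-1.0):
    # Binary auditing decision; the threshold is a hypothetical stand-in
    # for the paper's trained discriminator.
    return audit_score(artist_features, suspect_features) >= threshold

# Simulated style features (each row = one image's style representation).
rng = np.random.default_rng(0)
artist = rng.normal(0.0, 0.1, size=(32, 8))    # the artist's artworks
pirate = rng.normal(0.0, 0.1, size=(32, 8))    # generations mimicking the style
innocent = rng.normal(3.0, 0.1, size=(32, 8))  # an unrelated model's generations

print(audit(artist, pirate))    # matching style distributions -> flagged
print(audit(artist, innocent))  # distant distributions -> not flagged
```

In the paper's actual setting, the AUC is computed over such scores across many artist/model pairs, which is where the reported >0.937 figure comes from.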
Linkang Du
Xi'an Jiaotong University
Trustworthy Machine Learning, Differential Privacy
Zheng Zhu
Zhejiang University, Hangzhou, China; The Chinese University of Hong Kong, Hong Kong, China
Min Chen
Vrije Universiteit Amsterdam, Amsterdam, Netherlands
Zhou Su
Xi'an Jiaotong University
Shouling Ji
Professor, Zhejiang University & Georgia Institute of Technology
Data-driven Security, AI Security, Software Security, Privacy
Peng Cheng
Zhejiang University, Hangzhou, China
Jiming Chen
Zhejiang University, Hangzhou, China; Hangzhou Dianzi University, Hangzhou, China
Zhikun Zhang
Assistant Professor, Zhejiang University
Trustworthy AI, Data Privacy, Differential Privacy