🤖 AI Summary
Tracing the origin of LLM-generated text in black-box querying scenarios remains challenging—especially under multilingual, multi-domain settings with heterogeneous models and high data noise—leading to low identification accuracy and undermining model traceability and security.
Method: We propose the first fine-grained fingerprint detection framework for LLM-generated text. We construct FD-Datasets, a multilingual, multi-domain benchmark covering 20 mainstream LLMs and 90K samples. Leveraging Qwen2.5-7B, we apply parameter-efficient LoRA fine-tuning, integrating language-agnostic feature extraction with a multi-task discriminative head.
Contribution/Results: Our framework achieves a macro-F1 score of 89.2%, outperforming the state-of-the-art baseline LM-D by 16.7 percentage points. It significantly enhances black-box provenance attribution under cross-model, low-resource, and high-noise conditions, advancing robust, scalable, and secure LLM forensics.
📝 Abstract
Using large language model (LLM) integration platforms without transparency about which LLM is being invoked can pose security risks. Specifically, attackers may exploit this black-box scenario to deploy malicious models and embed viruses in the code provided to users. In this context, it is increasingly urgent for users to clearly identify the LLM they are interacting with, in order to avoid unknowingly becoming victims of malicious models. However, existing studies primarily focus on distinguishing human-written from machine-generated text, with limited attention to classifying texts generated solely by different models. Current research also faces dual bottlenecks: poor quality of LLM-generated text (LLMGT) datasets and limited coverage of detectable LLMs, resulting in poor detection performance for various LLMGT in black-box scenarios. To address these challenges, we propose the first LLMGT fingerprint detection model, **FDLLM**, based on Qwen2.5-7B and fine-tuned using LoRA. FDLLM handles detection tasks across multilingual and multi-domain scenarios more efficiently. Furthermore, we construct a dataset named **FD-Datasets**, consisting of 90,000 samples that span multiple languages and domains, covering 20 different LLMs. Experimental results demonstrate that FDLLM achieves a macro F1 score 16.7% higher than the best baseline method, LM-D.