🤖 AI Summary
This work addresses membership inference attacks (MIAs) in a strict black-box setting where only generated text is accessible. In this scenario, existing methods struggle to generalize and often depend on internal model information, leaving them unable to assess the training-data privacy risks of large language models (LLMs). To overcome this limitation, we propose SimMIA, a framework that combines an adaptive sampling strategy with a new scoring mechanism to perform effective membership inference using only model-generated text. We also introduce WikiMIA-25, a benchmark tailored to real-world closed-source LLMs, and show that SimMIA significantly outperforms current pure black-box approaches, achieving performance comparable to strong baselines that access logits and, for the first time, matching the efficacy of white-box methods in a purely text-based black-box setting.
📝 Abstract
Membership Inference Attacks (MIAs) act as a crucial auditing tool for the opaque training data of Large Language Models (LLMs). However, existing techniques predominantly rely on inaccessible model internals (e.g., logits) or suffer from poor generalization across domains in strict black-box settings where only generated text is available. In this work, we propose SimMIA, a robust MIA framework tailored for this text-only regime by leveraging an advanced sampling strategy and scoring mechanism. Furthermore, we present WikiMIA-25, a new benchmark curated to evaluate MIA performance on modern proprietary LLMs. Experiments demonstrate that SimMIA achieves state-of-the-art results in the black-box setting, rivaling baselines that exploit internal model information.
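The abstract does not spell out the scoring mechanism, but the general idea of a text-only MIA can be illustrated with a minimal sketch: prompt the model with a prefix of the candidate text, sample several continuations, and score membership by how similar the samples are to the true continuation (high similarity suggests the model memorized the sequence, i.e., it was likely in the training data). The `membership_score` function and the toy generations below are illustrative assumptions, not SimMIA's actual algorithm:

```python
from difflib import SequenceMatcher

def membership_score(target_suffix: str, samples: list[str]) -> float:
    """Hypothetical text-only MIA score: best character-level similarity
    between the true continuation and any model-generated sample.
    Higher values suggest the prefix/continuation pair was memorized."""
    return max(SequenceMatcher(None, target_suffix, s).ratio() for s in samples)

# Toy stand-ins for model generations (assumed, for illustration only):
# a memorized prefix tends to yield near-verbatim continuations,
# while an unseen prefix yields unrelated text.
memorized_samples = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over a lazy dog",
]
unseen_samples = [
    "an entirely different continuation about weather",
    "another unrelated sentence from the model",
]

target = "the quick brown fox jumps over the lazy dog"
print(membership_score(target, memorized_samples))  # near 1.0 (likely member)
print(membership_score(target, unseen_samples))     # much lower (non-member)
```

A real attack would replace the toy lists with actual samples drawn from the target LLM's API, and would need a decision threshold calibrated on known member/non-member examples.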