Hide and Seek in Embedding Space: Geometry-based Steganography and Detection in Large Language Models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a security weakness of existing steganographic methods in large language models: hidden messages are often fully recoverable and thus easy to detect. To mitigate this risk, the authors propose a novel steganographic approach that leverages the geometric structure of the embedding space to significantly reduce message recoverability, and, for the first time, uses this structure to construct a covert communication channel. They additionally introduce an interpretability technique based on linear probing to detect steganographic behavior in maliciously fine-tuned models. Experimental results on Llama-8B, Ministral-8B, and Llama-70B demonstrate that the proposed method substantially lowers the recoverability of hidden messages while improving detection accuracy by up to 33% over baseline approaches.

📝 Abstract
Fine-tuned LLMs can covertly encode prompt secrets into outputs via steganographic channels. Prior work demonstrated this threat but relied on trivially recoverable encodings. We formalize payload recoverability via classifier accuracy and show previous schemes achieve 100% recoverability. In response, we introduce low-recoverability steganography, replacing arbitrary mappings with embedding-space-derived ones. For Llama-8B (LoRA) and Ministral-8B (LoRA) trained on TrojanStego prompts, exact secret recovery rises from 17%→30% (+78%) and 24%→43% (+80%) respectively, while on Llama-70B (LoRA) trained on Wiki prompts, it climbs from 9%→19% (+123%), all while reducing payload recoverability. We then discuss detection. We argue that detecting fine-tuning-based steganographic attacks requires approaches beyond traditional steganalysis. Standard approaches measure distributional shift, which is an expected side-effect of fine-tuning. Instead, we propose a mechanistic interpretability approach: linear probes trained on later-layer activations detect the secret with up to 33% higher accuracy in fine-tuned models compared to base models, even for low-recoverability schemes. This suggests that malicious fine-tuning leaves actionable internal signatures amenable to interpretability-based defenses.
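The probe-based detection idea from the abstract can be pictured with a minimal numpy sketch. Everything here is illustrative, not the paper's actual setup: the "activations" are synthetic vectors in which a secret bit is linearly encoded along one direction (a stand-in for what a maliciously fine-tuned model might leave in its later layers), and the probe is plain logistic regression trained by gradient descent. Accuracy well above chance on held-out data is the detection signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for later-layer activations: d-dimensional vectors
# in which a "fine-tuned" model linearly encodes a secret bit.
d, n = 64, 2000
secret = rng.integers(0, 2, size=n)          # hidden payload bits
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)       # encoding direction (illustrative)
acts = rng.normal(size=(n, d)) + 1.5 * np.outer(secret - 0.5, direction)

# Held-out split
X_tr, X_te = acts[:1500], acts[1500:]
y_tr, y_te = secret[:1500], secret[1500:]

# Linear probe: logistic regression fit by plain gradient descent
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))    # sigmoid predictions
    w -= 0.5 * X_tr.T @ (p - y_tr) / len(y_tr)   # gradient step on weights
    b -= 0.5 * np.mean(p - y_tr)                 # gradient step on bias

# Probe accuracy on held-out activations; ~0.5 would mean no detectable signal.
probe_acc = np.mean(((X_te @ w + b) > 0) == y_te)
print(f"probe accuracy: {probe_acc:.2f}")
```

In a base model with no encoded secret, the same probe would sit near chance (0.5); the paper's reported gap of up to 33% between fine-tuned and base models is exactly this kind of accuracy difference.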
Problem

Research questions and friction points this paper is trying to address.

steganography
large language models
fine-tuning
payload recoverability
steganalysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometry-based steganography
low-recoverability steganography
embedding space
mechanistic interpretability
steganalysis
Charles Westphal
UCL Centre for Artificial Intelligence, University College London, UK; ML Alignment Theory Scholars, Berkeley, CA, USA
K. Navaie
School of Computing and Communications, Lancaster University, UK
Fernando E. Rosas
Lecturer at University of Sussex
Complexity
Emergence
AI Safety
Computational Neuroscience