LLMs Have Rhythm: Fingerprinting Large Language Models Using Inter-Token Times and Network Traffic Analysis

📅 2025-02-27
🤖 AI Summary
To address the challenge of real-time, secure model identification in large language model (LLM) deployment scenarios, this paper proposes a passive, non-intrusive LLM fingerprinting method. The approach leverages the intrinsic inter-token timing (ITT) rhythm, which arises from autoregressive token generation, as a lightweight, hardware-agnostic fingerprint, and jointly models packet-level timing features from encrypted network traffic. Crucially, it enables end-to-end real-time identification without access to model outputs or weights and without decrypting traffic. The method integrates deep temporal modeling with multi-environment adaptation (GPU/CPU/VPN/LAN/remote). Evaluated on 16 open-source and 10 commercial closed-source LLMs, it demonstrates strong cross-network robustness and overcomes key limitations of conventional output-based analysis, namely vulnerability to adversarial attacks, high latency, and dependence on model access.
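The core signal described above, inter-token times extracted from a stream of arrival timestamps, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function names and the summary features chosen are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): derive Inter-Token
# Times (ITTs) and crude fingerprint features from a sequence of arrival
# timestamps, as might be recorded from a streamed, encrypted token response.

def inter_token_times(timestamps):
    """Deltas between consecutive token/packet arrivals, in seconds."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def itt_summary(timestamps):
    """Toy feature vector over the ITTs: mean, std, min, max."""
    itts = inter_token_times(timestamps)
    n = len(itts)
    mean = sum(itts) / n
    var = sum((x - mean) ** 2 for x in itts) / n
    return {"mean": mean, "std": var ** 0.5, "min": min(itts), "max": max(itts)}

# Example: a model emitting tokens at a roughly steady ~20 ms rhythm.
stamps = [0.000, 0.021, 0.040, 0.062, 0.081]
features = itt_summary(stamps)
```

In the paper these timing sequences are fed to a deep temporal model rather than reduced to hand-picked statistics; the summary features here only illustrate why a steady per-model "rhythm" is informative.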

📝 Abstract
As Large Language Models (LLMs) become increasingly integrated into many technological ecosystems across various domains and industries, identifying which model is deployed or being interacted with is critical for the security and trustworthiness of the systems. Current verification methods typically rely on analyzing the generated output to determine the source model. However, these techniques are susceptible to adversarial attacks, operate in a post-hoc manner, and may require access to the model weights to inject a verifiable fingerprint. In this paper, we propose a novel passive and non-invasive fingerprinting technique that operates in real-time and remains effective even under encrypted network traffic conditions. Our method leverages the intrinsic autoregressive generation nature of language models, which generate text one token at a time based on all previously generated tokens, creating a unique temporal pattern like a rhythm or heartbeat that persists even when the output is streamed over a network. We find that measuring the Inter-Token Times (ITTs), the time intervals between consecutive tokens, can identify different language models with high accuracy. We develop a Deep Learning (DL) pipeline to capture these timing patterns using network traffic analysis and evaluate it on 16 Small Language Models (SLMs) and 10 proprietary LLMs across different deployment scenarios, including local host machine (GPU/CPU), Local Area Network (LAN), Remote Network, and Virtual Private Network (VPN). The experimental results confirm that our proposed technique is effective and maintains high accuracy even when tested in different network conditions. This work opens a new avenue for model identification in real-world scenarios and contributes to more secure and trustworthy language model deployment.
Problem

Research questions and friction points this paper is trying to address.

Identify deployed LLMs for system security and trustworthiness.
Develop real-time, non-invasive fingerprinting under encrypted traffic.
Use Inter-Token Times and network analysis for model identification.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Passive fingerprinting using inter-token times
Real-time identification under encrypted traffic
Deep learning pipeline for network traffic analysis
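The identification step in the innovations above can be illustrated with a deliberately simple stand-in. The paper uses a deep learning pipeline over timing sequences; the dependency-free nearest-centroid classifier below is a swapped-in toy, and all model names and timing numbers in it are fabricated for illustration.

```python
# Illustrative stand-in (NOT the paper's DL pipeline): a nearest-centroid
# classifier over ITT feature vectors, e.g. [mean ITT, std ITT] per trace.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def fit(labeled_traces):
    """labeled_traces: {model_name: [feature_vector, ...]} -> per-model centroids."""
    return {name: centroid(vecs) for name, vecs in labeled_traces.items()}

def predict(centroids, vec):
    """Assign vec to the model whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec)) ** 0.5
    return min(centroids, key=lambda name: dist(centroids[name]))

# Fabricated training traces: a "fast" model (~20 ms ITT) vs a "slow" one (~45 ms).
train = {
    "model-A": [[0.020, 0.002], [0.021, 0.003]],
    "model-B": [[0.045, 0.010], [0.043, 0.012]],
}
clf = fit(train)
```

A deep temporal model (e.g. a recurrent or convolutional network over the raw ITT sequence) replaces this distance rule in the actual pipeline, which is what lets the method absorb network-induced jitter across LAN, VPN, and remote deployments.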
Saeif Alhazbi
Hamad Bin Khalifa University
AI Security · AI Privacy · AI for Cybersecurity · AI Safety
Ahmed Mohamed Hussain
Pre-Doctoral Researcher | KTH Royal Institute of Technology
Security · Privacy · IoT · AI for Cybersecurity · AI Trustworthiness
G. Oligeri
College of Science and Engineering (CSE), Hamad Bin Khalifa University (HBKU) – Doha, Qatar
P. Papadimitratos
Networked Systems Security Group, KTH Royal Institute of Technology – Stockholm, Sweden