GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of efficient and effective defense mechanisms for small language models (SLMs) against diverse jailbreak attacks. It reveals, for the first time, that malicious and benign inputs form distinguishable activation patterns in SLMs' hidden-layer representation space. Building on this insight, the authors propose a lightweight, real-time defense that analyzes token-level activation signatures to build an architecture-agnostic filter, detecting and blocking adversarial prompts during inference. Experiments show that the approach significantly improves robustness against nine jailbreak attacks across seven SLMs and three large language models (LLMs), making it well suited to resource-constrained edge deployment.
📝 Abstract
Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance with significantly lower computational costs and latency. These advantages make SLMs suitable for resource-constrained and efficient deployment on edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely due to an incomplete understanding of the internal representations across different layers of language models that facilitate jailbreak behaviors. In this paper, we conduct a comprehensive empirical study on 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden-layer activations across different layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token activation-based method that operates in the representation space to filter malicious prompts during inference while preserving benign ones. Our findings highlight robustness limitations across layers of language models and provide a practical direction for secure small language model deployment.
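The paper's implementation is not included on this page, so as a rough illustration of the core idea — filtering prompts by where their hidden-layer activations fall in the representation space — here is a minimal nearest-centroid sketch. The synthetic activations, dimensions, and threshold rule are all assumptions for illustration, not the authors' GUARD-SLM method:

```python
import numpy as np

# Hypothetical sketch (NOT the paper's implementation): classify a prompt
# by whether its mean-pooled hidden-layer activation lies closer to a
# "benign" or a "malicious" centroid learned from calibration prompts.

rng = np.random.default_rng(0)
dim = 64  # hidden size of a (hypothetical) SLM layer

# Simulated mean-pooled activations for labeled calibration prompts;
# in practice these would come from the model's hidden states.
benign_acts = rng.normal(loc=0.0, scale=1.0, size=(100, dim))
malicious_acts = rng.normal(loc=1.5, scale=1.0, size=(100, dim))

# "Activation signatures": per-class centroids in representation space.
benign_centroid = benign_acts.mean(axis=0)
malicious_centroid = malicious_acts.mean(axis=0)

def is_malicious(activation: np.ndarray) -> bool:
    """Flag a prompt whose activation is nearer the malicious centroid."""
    d_benign = np.linalg.norm(activation - benign_centroid)
    d_malicious = np.linalg.norm(activation - malicious_centroid)
    return bool(d_malicious < d_benign)

# Filter a new batch: flagged prompts would be blocked before generation.
test_batch = rng.normal(loc=1.5, scale=1.0, size=(10, dim))
flags = [is_malicious(a) for a in test_batch]
print(sum(flags), "of", len(flags), "prompts flagged")
```

A real deployment would replace the synthetic vectors with activations extracted from a chosen hidden layer at inference time; the appeal of this family of methods is that the centroid comparison adds negligible latency on edge hardware.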
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
Small Language Models
defense
internal representations
safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

token activation
jailbreak defense
small language models
representation space
malicious prompt filtering
Md Jueal Mia
Graduate Research Assistant, Knight Foundation School of Computing and Information Sciences, FIU
Privacy and Security · Federated Learning · Machine Learning · Large Language Model
Joaquin Molto
Knight Foundation School of Computing and Information Sciences, Florida International University; Security, Optimization, and Learning for InterDependent networks laboratory (solid lab), Florida International University
Yanzhao Wu
Knight Foundation School of Computing and Information Sciences, Florida International University
M. Hadi Amini
Associate Professor, Florida International University
Distributed Learning · Edge AI · Trustworthy AI · CPS Security