🤖 AI Summary
This work addresses the lack of efficient and effective defenses for small language models (SLMs) against diverse jailbreak attacks. It reveals, for the first time, that malicious and benign inputs exhibit distinguishable activation patterns in the hidden layers of SLMs' internal representation space. Building on this insight, the authors propose a lightweight, real-time defense that analyzes token-level activation signatures to construct an architecture-agnostic filtering mechanism, detecting and blocking adversarial prompts during inference. Experiments show that the approach significantly improves robustness against nine jailbreak attacks across seven SLMs and three large language models (LLMs), making it particularly suitable for resource-constrained edge deployments.
📝 Abstract
Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance with significantly lower computational cost and latency. These advantages make SLMs well suited to deployment on resource-constrained edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely because the internal representations that facilitate jailbreak behaviors across different layers of language models remain poorly understood. In this paper, we conduct a comprehensive empirical study of 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden activations across layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token activation-based method that operates in the representation space to filter malicious prompts during inference while preserving benign ones. Our findings highlight robustness limitations across layers of language models and provide a practical direction for secure small language model deployment.
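The core idea, that benign and malicious inputs form separable clusters in a model's hidden-representation space, can be sketched with a minimal nearest-centroid filter. Everything below is a hypothetical illustration: the activation vectors are synthetic stand-ins, and the actual GUARD-SLM pipeline (which layers it reads, how it pools token activations, how it scores) is not reproduced here. In a real system the vectors would be hidden-layer states extracted from the SLM at inference time.

```python
# Hypothetical sketch: filtering prompts by nearest-centroid classification
# in a hidden-representation space. Activations here are synthetic; a real
# deployment would extract per-token hidden states from the SLM itself.
from math import sqrt

def mean_vector(vectors):
    """Average a list of equal-length activation vectors (simple pooling)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def fit_centroids(benign_acts, malicious_acts):
    """Build one centroid per class from labeled calibration activations."""
    return {"benign": mean_vector(benign_acts),
            "malicious": mean_vector(malicious_acts)}

def classify(activation, centroids):
    """Assign the class whose centroid is most similar to the activation."""
    return max(centroids, key=lambda c: cosine(activation, centroids[c]))

# Synthetic calibration data: benign inputs cluster near (1, 0) in this toy
# 2-D space, malicious inputs near (0, 1).
benign = [[1.0, 0.1], [0.9, 0.0], [1.1, 0.2]]
malicious = [[0.1, 1.0], [0.0, 0.9], [0.2, 1.1]]
cents = fit_centroids(benign, malicious)

print(classify([0.95, 0.05], cents))  # → benign
print(classify([0.05, 0.95], cents))  # → malicious
```

A prompt classified as malicious would be blocked before generation; because the check is a handful of dot products over already-computed hidden states, the added inference cost is negligible, which is what makes activation-space filtering attractive for edge deployment.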