🤖 AI Summary
This work addresses the lack of efficient, low-overhead, real-time defenses for black-box large language models (LLMs) against targeted attacks such as backdoor and prompt injection attacks. The authors propose DualSentinel, a lightweight runtime detection framework that, for the first time, identifies and exploits a distinctive "Entropy Lull" phenomenon, a period of abnormally low and stable token probability entropy that appears during generation once an attack is triggered, as its detection criterion. By combining dynamic monitoring of token-level probability entropy with task-flipping verification, the framework establishes a two-stage detection mechanism. Evaluated across diverse attack scenarios, the method achieves high detection accuracy with near-zero false positive rates while incurring negligible inference overhead, making it well suited for practical deployment.
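As a rough illustration of the entropy-monitoring idea (a minimal sketch, not the paper's implementation), the code below computes token-level Shannon entropy from next-token probabilities and flags a window that is both abnormally low and abnormally stable. The function names, window size, and thresholds (`window`, `low_threshold`, `flat_threshold`) are illustrative assumptions rather than the calibrated settings used by DualSentinel.

```python
import math
from typing import List, Sequence

def token_entropy(probs: Sequence[float]) -> float:
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def detect_entropy_lull(entropies: List[float],
                        window: int = 8,
                        low_threshold: float = 0.3,
                        flat_threshold: float = 0.05) -> bool:
    """Flag a recent window whose entropy is both low (magnitude) and flat (trend).

    Thresholds here are placeholder values for illustration only.
    """
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)  # crude proxy for stability of the trend
    return mean < low_threshold and spread < flat_threshold
```

In a real deployment, such a check would presumably run incrementally as tokens stream in, using entropies derived from the log-probabilities returned by the model API.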
📝 Abstract
Recent intelligent systems integrate powerful Large Language Models (LLMs) through APIs, but their trustworthiness can be critically undermined by targeted attacks such as backdoor and prompt injection attacks, which covertly force LLMs to generate specific malicious sequences. Existing defenses against such threats typically require privileged access, impose prohibitive costs, and interfere with normal inference, rendering them impractical for real-world scenarios. To address these limitations, we introduce DualSentinel, a lightweight and unified defense framework that accurately and promptly detects the activation of targeted attacks during LLM generation. We first identify a characteristic of compromised LLMs, termed the Entropy Lull: when a targeted attack successfully hijacks the generation process, the LLM exhibits a distinct period of abnormally low and stable token probability entropy, indicating that it is following a fixed path rather than making creative choices. DualSentinel exploits this pattern through a dual-check approach. It first applies magnitude- and trend-aware monitoring to proactively and sensitively flag an entropy lull at runtime. When a lull is flagged, it triggers a lightweight yet powerful secondary verification based on task-flipping. An attack is confirmed only if the entropy lull persists across both the original and the flipped task, demonstrating that the LLM's output is coercively controlled. Extensive evaluations show that DualSentinel is both highly effective (superior detection accuracy with near-zero false positives) and remarkably efficient (negligible additional cost), offering a truly practical path toward securing deployed LLMs. The source code can be accessed at https://doi.org/10.5281/zenodo.18479273.
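To make the two-stage flow concrete, the sketch below wires an entropy-lull detector into the dual-check logic described above. It is only an assumed outline: the `generate`-style hook, `flip_task`, and `detect_lull` are hypothetical placeholders, and the actual flipping strategy and verification criteria follow the paper, not this code.

```python
from typing import Callable, List, Tuple

# Hypothetical generation hook: returns the output text plus the per-token
# entropy trace observed while decoding (e.g., from API log-probabilities).
GenerateFn = Callable[[str], Tuple[str, List[float]]]

def dual_check(prompt: str,
               generate: GenerateFn,
               flip_task: Callable[[str], str],
               detect_lull: Callable[[List[float]], bool]) -> bool:
    """Two-stage detection sketch: flag an entropy lull, then confirm the
    attack only if the lull persists under a flipped version of the task."""
    # Stage 1: runtime monitoring on the original request.
    _, entropies = generate(prompt)
    if not detect_lull(entropies):
        return False  # benign: no suspicious low-and-stable entropy window

    # Stage 2: task-flipping verification. A coerced model keeps emitting the
    # attacker's fixed sequence even when the requested task is inverted.
    _, flipped_entropies = generate(flip_task(prompt))
    return detect_lull(flipped_entropies)
```

The intuition behind the second stage, as the abstract describes it, is that a benign stretch of low entropy should vanish once the task changes, whereas a trigger-driven, coercively controlled output persists, which is what keeps false positives near zero.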