ReGA: Representation-Guided Abstraction for Model-based Safeguarding of LLMs

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the security risks of large language models (LLMs)—specifically their vulnerability to jailbreaking attacks and propensity to generate harmful content—this paper proposes a real-time monitoring framework based on representation-guided abstraction. The method jointly leverages representation learning, abstract interpretation, safety-critical direction extraction, and finite-state abstraction. Its core contribution is the definition and extraction of "safety-critical representations": low-dimensional directions in hidden states that enable scalable, interpretable, and generalizable abstraction across diverse safety dimensions while maintaining robustness against real-world adversarial attacks. Evaluated on prompt-level and conversation-level harmful content detection, the framework achieves AUROC scores of 0.975 and 0.985, respectively—substantially outperforming existing defense paradigms. The implementation is publicly available.

📝 Abstract
Large Language Models (LLMs) have achieved significant success in various tasks, yet concerns about their safety and security have emerged. In particular, they risk generating harmful content and are vulnerable to jailbreaking attacks. To analyze and monitor machine learning models, model-based analysis has demonstrated notable potential for stateful deep neural networks, yet it suffers from scalability issues when extended to LLMs due to their vast feature spaces. In this paper, we propose ReGA, a model-based analysis framework with representation-guided abstraction, to safeguard LLMs against harmful prompts and generations. By leveraging safety-critical representations—low-dimensional directions emerging in hidden states that indicate safety-related concepts—ReGA effectively addresses the scalability issue when constructing the abstract model for safety modeling. Our comprehensive evaluation shows that ReGA distinguishes well between safe and harmful inputs, achieving an AUROC of 0.975 at the prompt level and 0.985 at the conversation level. Additionally, ReGA exhibits robustness to real-world attacks and generalization across different safety perspectives, outperforming existing safeguard paradigms in terms of interpretability and scalability. Overall, ReGA serves as an efficient and scalable solution to enhance LLM safety by integrating representation engineering with model-based abstraction, paving the way for new paradigms that apply software-engineering insights to AI safety. Our code is available at https://github.com/weizeming/ReGA.
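The core idea—extracting a low-dimensional safety-critical direction from hidden states, projecting onto it, and abstracting the projection into a small set of discrete states for monitoring—can be sketched as below. This is a minimal illustration on synthetic data, not the paper's implementation: the difference-of-means direction, the interval-based abstraction, and all dimensions and thresholds are assumptions chosen for clarity (in practice the hidden states would come from an LLM's layer activations).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for LLM hidden states (illustrative only; in practice these
# would be, e.g., the activations of the final prompt token at some layer).
d = 64
safe_states = rng.normal(0.0, 1.0, size=(100, d))
# Harmful states are shifted along one coordinate to simulate a
# safety-related concept emerging as a direction in hidden space.
harmful_states = rng.normal(0.0, 1.0, size=(100, d)) + 2.0 * np.eye(d)[0]

# 1. Extract a "safety-critical direction" as the difference of class means
#    (a common representation-engineering heuristic; ReGA's exact extraction
#    procedure may differ).
direction = harmful_states.mean(axis=0) - safe_states.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Project hidden states onto the low-dimensional direction.
def project(states):
    return states @ direction

# 3. Abstract the 1-D projection into interval bins, yielding a small set of
#    discrete abstract states over which safety can be monitored.
bins = np.linspace(-4.0, 4.0, 9)  # 8 interior intervals + 2 overflow states
def abstract_state(state_vec):
    return int(np.digitize(state_vec @ direction, bins))

# 4. Score a new input by its projection; a simple midpoint threshold
#    separates safe from harmful regions of the abstract state space.
threshold = 0.5 * (project(safe_states).mean() + project(harmful_states).mean())
def is_harmful(state_vec):
    return bool((state_vec @ direction) > threshold)
```

On this synthetic data the classes separate cleanly along the extracted direction; the real framework would fit the abstraction on actual model activations and monitor abstract-state trajectories across a conversation rather than a single projection.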
Problem

Research questions and friction points this paper is trying to address.

Safeguarding LLMs from harmful content generation
Addressing scalability in model-based safety analysis
Detecting and preventing jailbreaking attacks on LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representation-guided abstraction for LLM safety
Leveraging safety-critical low-dimensional representations
Model-based framework with interpretability and scalability