AI Summary
This work addresses the vulnerability of large language models to attacks such as jailbreaking and prompt injection, which can bypass existing alignment mechanisms and elicit harmful outputs. To tackle this challenge, we introduce SecureBreak, the first high-quality annotated dataset specifically designed for detecting residual harmful content resulting from insufficient safety alignment. SecureBreak covers diverse safety risk categories and employs a conservative human annotation strategy to ensure high reliability. Leveraging this dataset, we combine fine-tuning with safety filtering techniques, substantially improving the model's ability to recognize unsafe content. Experimental results demonstrate that models fine-tuned on SecureBreak achieve significantly enhanced safety performance across multiple risk categories, making them well suited for post-deployment safety filtering and alignment refinement.
Abstract
Large language models are becoming pervasive core components of many real-world applications, making security alignment a critical requirement for their safe deployment. Although previous work has focused primarily on model architectures and alignment methodologies, these approaches alone cannot guarantee the complete elimination of harmful generations. This concern is reinforced by a growing body of scientific literature showing that attacks such as jailbreaking and prompt injection can bypass existing security alignment mechanisms. Additional security strategies are therefore needed, both to provide qualitative feedback on the robustness of the security alignment achieved at the training stage, and to serve as an "ultimate" defense layer that blocks unsafe outputs possibly produced by deployed models. As a contribution in this scenario, this paper introduces SecureBreak, a safety-oriented dataset designed to support the development of AI-driven solutions for detecting harmful LLM outputs caused by residual weaknesses in security alignment. The dataset is highly reliable thanks to careful manual annotation, with labels assigned conservatively to err on the side of safety, and it supports the detection of unsafe content across multiple risk categories. Experiments with pre-trained LLMs show improved results after fine-tuning on SecureBreak. Overall, the dataset is useful both for post-generation safety filtering and for guiding further model alignment and security improvements.