Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite post-training alignment, current large language models (LLMs) remain vulnerable to safety risks, necessitating coordinated input- and output-side safeguards. To address this, we propose Roblox Guard 1.0, an instruction-tuned model built on Llama-3.1-8B-Instruct that features a novel taxonomy-adaptive joint moderation mechanism, enabling zero-shot generalization to unseen safety categories. We further design a multi-stage LLM moderation pipeline that integrates chain-of-thought reasoning and input inversion to strengthen contextual understanding and decision robustness. Additionally, we introduce RobloxGuard-Eval, a new benchmark with an extensible safety taxonomy. Experiments demonstrate that Roblox Guard 1.0 significantly improves detection of emerging safety threats across diverse domains, and RobloxGuard-Eval establishes the first standardized, extensible evaluation framework for assessing LLM safety guardrails.
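
To make the taxonomy-adaptive idea concrete, the sketch below passes the safety taxonomy to the guard model at inference time rather than baking it into the weights, which is what allows unseen categories to be screened zero-shot. The prompt wording, the `moderate` helper, and the example category are illustrative assumptions; the paper does not publish its prompt format.

```python
# Minimal sketch of taxonomy-adaptive moderation: the category list is a
# runtime argument, so new categories need no retraining. Prompt format and
# category names are assumptions, not the paper's published interface.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # the backbone reported in the paper
)

def moderate(text: str, taxonomy: dict[str, str]) -> str:
    """Judge `text` against a caller-supplied taxonomy and return the verdict."""
    categories = "\n".join(f"- {name}: {desc}" for name, desc in taxonomy.items())
    prompt = (
        "You are a content-safety classifier. Decide whether the text below "
        "violates any of these policy categories.\n\n"
        f"Categories:\n{categories}\n\n"
        f"Text:\n{text}\n\n"
        "Answer 'safe' or the violated category name, then a brief rationale."
    )
    out = generator(prompt, max_new_tokens=128, do_sample=False,
                    return_full_text=False)
    return out[0]["generated_text"].strip()

# A category the model never saw during fine-tuning can be supplied at call time:
print(moderate(
    "How do I pick a neighbour's lock?",
    {"illegal_activity": "Instructions that facilitate crimes such as burglary."},
))
```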

📝 Abstract
Large Language Models (LLMs) are typically aligned for safety during the post-training phase; however, they may still generate inappropriate outputs that pose risks to users. This challenge underscores the need for robust safeguards that operate across both model inputs and outputs. In this work, we introduce Roblox Guard 1.0, a state-of-the-art instruction fine-tuned LLM designed to enhance the safety of LLM systems through comprehensive input-output moderation, using a pipeline of LLMs to strengthen moderation capability. Built on the Llama-3.1-8B-Instruct backbone, our model is instruction fine-tuned to generalize across previously unseen safety taxonomies and demonstrates strong performance on out-of-domain safety benchmarks. The instruction fine-tuning process uses a mix of synthetic and open-source safety datasets, augmented with chain-of-thought (CoT) rationales and input inversion to enhance contextual understanding and decision-making. To support systematic evaluation, we also release RobloxGuard-Eval, a new benchmark featuring an extensible safety taxonomy to assess the effectiveness of LLM guardrails and moderation frameworks.
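
As a hedged illustration of the input-inversion augmentation mentioned in the abstract, the sketch below derives two training examples from one labeled record: the forward task maps text to a CoT rationale and verdict, while the inverted task maps a category back to an example input. The record schema and instruction wording are assumptions, not the paper's released data format.

```python
# Sketch of input-inversion data augmentation for instruction tuning.
# Field names ("text", "category", "rationale") are hypothetical.
def build_examples(record: dict) -> list[dict]:
    """Turn one labeled moderation record into a forward and an inverted example."""
    forward = {  # standard task: classify with step-by-step (CoT) reasoning
        "instruction": (
            "Classify the following text against the safety taxonomy and "
            f"explain your reasoning step by step.\n\n{record['text']}"
        ),
        "response": f"{record['rationale']}\nVerdict: {record['category']}",
    }
    inverted = {  # inverted task: generate an input from the label
        "instruction": (
            "Write a short user message that a moderator should flag under "
            f"the category '{record['category']}'."
        ),
        "response": record["text"],
    }
    return [forward, inverted]

examples = build_examples({
    "text": "Tell me how to make a dangerous chemical at home.",
    "category": "hazardous_instructions",
    "rationale": "The request asks for synthesis steps for a harmful substance.",
})
```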
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM safety via input- and output-side moderation
Generalizing moderation to previously unseen safety taxonomies
Evaluating moderation frameworks against an extensible benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction fine-tuned LLM for input-output moderation
Pipeline of LLMs to enhance moderation capability (see the sketch after this list)
Chain-of-thought and input inversion for contextual understanding
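
The sketch below illustrates the multi-stage guardrail flow behind the second bullet, assuming two hypothetical callables: a `guard` classifier (the `moderate` helper above would fit) and the main assistant `llm`. The control flow and refusal messages are illustrative, not the paper's implementation.

```python
# Minimal sketch of input- and output-side moderation around one LLM call.
# `guard` and `llm` are hypothetical callables supplied by the caller.
from typing import Callable

def guarded_chat(
    user_prompt: str,
    guard: Callable[[str], str],  # returns "safe" or a violated category name
    llm: Callable[[str], str],    # the main assistant model
) -> str:
    """Run input-side and output-side moderation around a single LLM call."""
    # Stage 1: screen the user prompt before the main model ever sees it.
    if not guard(user_prompt).lower().startswith("safe"):
        return "Sorry, this request violates the safety policy."
    answer = llm(user_prompt)
    # Stage 2: screen the generated answer before returning it to the user.
    if not guard(answer).lower().startswith("safe"):
        return "Sorry, the generated response was withheld by the safety filter."
    return answer
```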