Can Safety Emerge from Weak Supervision? A Systematic Analysis of Small Language Models

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current safety alignment of large language models relies on costly, static human annotations and red-teaming evaluations, which struggle to adapt to model evolution and often compromise utility. This work proposes Self-MOA, a framework that establishes the first fully automated safety alignment loop: it employs weak supervision to automatically evaluate the model, dynamically generates targeted red-team prompts, self-constructs preference data, and applies multi-objective preference optimization to simultaneously enhance both safety and usefulness. Evaluated across multiple small language models and safety benchmarks, Self-MOA achieves a 12.41% improvement in safety while preserving model utility, using only approximately one-eleventh of the human supervision data required by conventional approaches.

📝 Abstract
Safety alignment is critical for deploying large language models (LLMs) in real-world applications, yet most existing approaches rely on large human-annotated datasets and static red-teaming benchmarks that are costly, difficult to scale, and slow to adapt to evolving model behaviors. Moreover, overly conservative safety mechanisms can reduce model usefulness by rejecting sensitive but legitimate queries. We introduce Self-MOA (Self Multi-Objective Alignment), a fully automated framework for aligning small language models using weak supervision from automated evaluator models. Self-MOA operates as a closed loop that dynamically generates model-specific red-team prompts, constructs preference data from model-generated responses, and aligns models via multi-objective preference optimization to jointly optimize for safety and helpfulness. Across multiple small language models and safety benchmarks, Self-MOA achieves a 12.41% improvement in safety while preserving helpfulness, using as little as one-eleventh of the training data required by human-supervised alignment baselines. These results demonstrate that adaptive, automated alignment can reduce the dependence on static, human-curated safety pipelines in resource-constrained settings.
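The closed loop the abstract describes can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the evaluator, the red-team prompt generator, the scoring weights, and all function names are assumptions; a real system would replace the final step with an actual multi-objective preference-optimization update (e.g. a DPO-style gradient step) on the collected pairs.

```python
# Toy sketch of a Self-MOA-style closed loop (illustrative assumptions only).
import random

def weak_evaluator(response):
    """Stand-in weak supervisor: returns (safety, helpfulness) in [0, 1]."""
    safety = 0.0 if "UNSAFE" in response else 1.0
    helpfulness = min(len(response.split()) / 10.0, 1.0)
    return safety, helpfulness

def moa_score(response, w_safety=0.7, w_help=0.3):
    """Multi-objective score: weighted sum of safety and helpfulness."""
    safety, helpfulness = weak_evaluator(response)
    return w_safety * safety + w_help * helpfulness

def generate_red_team_prompts(model, n=4):
    """Stand-in for dynamic, model-specific red-team prompt generation."""
    return [f"adversarial prompt {i}" for i in range(n)]

def build_preference_pairs(model, prompts):
    """Sample two responses per prompt; rank them with the weak evaluator."""
    pairs = []
    for p in prompts:
        a, b = model(p), model(p)
        chosen, rejected = (a, b) if moa_score(a) >= moa_score(b) else (b, a)
        pairs.append((p, chosen, rejected))
    return pairs

def self_moa_round(model):
    """One iteration: red-team -> preference data -> (stub) optimization."""
    prompts = generate_red_team_prompts(model)
    pairs = build_preference_pairs(model, prompts)
    # A real system would now run multi-objective preference optimization
    # on `pairs`; this sketch just returns them.
    return pairs

# Toy "model" that sometimes emits an unsafe marker.
random.seed(0)
toy_model = lambda p: ("UNSAFE reply" if random.random() < 0.5
                       else "a longer safe and helpful reply to " + p)
pairs = self_moa_round(toy_model)
```

Because no human labels enter the loop, each round's preference data is produced entirely by the weak evaluator, which is the sense in which supervision here is "weak" rather than human-curated.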
Problem

Research questions and friction points this paper is trying to address.

safety alignment
weak supervision
red-teaming
helpfulness
language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

weak supervision
automated alignment
multi-objective optimization
red teaming
small language models
Punyajoy Saha
Samsung Research Institute Bangalore, India
Sudipta Halder
Samsung Research Institute Bangalore, India
Debjyoti Mondal
Samsung Research Institute Bangalore, India
Subhadarshi Panda
City University of New York
machine learning · deep learning · natural language processing · cross-lingual NLP · cross-domain NLP