Do Large Language Models Reflect Demographic Pluralism in Safety?

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing safety alignment datasets for large language models: they predominantly rely on annotators from homogeneous demographic backgrounds and thus fail to capture diverse societal perspectives on safety. To overcome this, the authors propose Demo-SafetyBench, a benchmark that explicitly models demographic diversity at the prompt level by decoupling value frameworks from model responses, enabling a scalable and demographically robust safety evaluation framework. Prompts are reclassified into safety domains with Mistral-7B-Instruct, low-resource domains are expanded with Llama-3.1-8B-Instruct, and near-duplicates are removed via SimHash-based deduplication; the resulting dataset is then evaluated via zero-shot scoring using Gemma-7B, GPT-4o, and LLaMA-2-7B as raters. The final benchmark comprises 43,050 samples and demonstrates high inter-rater reliability (ICC = 0.87) and low demographic sensitivity (DS = 0.12), validating the feasibility and robustness of demographically inclusive safety assessment.
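The SimHash-based deduplication mentioned above fingerprints each prompt and drops near-duplicates by Hamming distance. As a hedged illustration only (the paper's tokenization, fingerprint width, and distance threshold are not given here, so the 64-bit MD5-per-token scheme and `threshold=3` below are assumptions), a minimal version might look like:

```python
import hashlib

def simhash(text, bits=64):
    # Hash each token to a `bits`-bit value and accumulate per-bit votes;
    # the sign of each accumulated vote becomes one bit of the fingerprint.
    votes = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    fp = 0
    for i in range(bits):
        if votes[i] > 0:
            fp |= 1 << i
    return fp

def hamming(a, b):
    # Number of bit positions where the two fingerprints differ.
    return bin(a ^ b).count("1")

def dedup(samples, threshold=3):
    # Keep a sample only if its fingerprint is farther than `threshold`
    # (in Hamming distance) from every fingerprint kept so far.
    kept, fps = [], []
    for s in samples:
        fp = simhash(s)
        if all(hamming(fp, f) > threshold for f in fps):
            kept.append(s)
            fps.append(fp)
    return kept
```

Because similar token sets vote the same way on most bits, near-duplicate prompts land within a small Hamming radius of each other, which is what makes this cheap to run over tens of thousands of augmented samples.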

📝 Abstract
Large Language Model (LLM) safety is inherently pluralistic, reflecting variations in moral norms, cultural expectations, and demographic contexts. Yet, existing alignment datasets such as ANTHROPIC-HH and DICES rely on demographically narrow annotator pools, overlooking variation in safety perception across communities. Demo-SafetyBench addresses this gap by modeling demographic pluralism directly at the prompt level, decoupling value framing from responses. In Stage I, prompts from DICES are reclassified into 14 safety domains (adapted from BEAVERTAILS) using Mistral 7B-Instruct-v0.3, retaining demographic metadata and expanding low-resource domains via Llama-3.1-8B-Instruct with SimHash-based deduplication, yielding 43,050 samples. In Stage II, pluralistic sensitivity is evaluated using LLMs as raters (Gemma-7B, GPT-4o, and LLaMA-2-7B) under zero-shot inference. Balanced thresholds (δ = 0.5, τ = 10) achieve high reliability (ICC = 0.87) and low demographic sensitivity (DS = 0.12), confirming that pluralistic safety evaluation can be both scalable and demographically robust.
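The reported reliability figure (ICC = 0.87) is an intraclass correlation across the three LLM raters. The abstract does not say which ICC variant is used, so assuming the common two-way random-effects, single-rater, absolute-agreement form ICC(2,1), a minimal sketch of the computation is:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an n-subjects x k-raters list of lists (here, n prompts
    scored by k LLM raters). The variant choice is an assumption; the
    paper may use a different ICC form.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    # Partition total variation into subjects (rows), raters (columns),
    # and residual error.
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # mean square, subjects
    msc = ss_cols / (k - 1)              # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For intuition: if the three raters agree exactly on every prompt, the function returns 1.0; if each rater applies a constant offset (systematic leniency) the absolute-agreement form penalizes it, pulling the ICC down.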
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Safety Alignment
Demographic Pluralism
Annotation Bias
Value Diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

demographic pluralism
safety evaluation
LLM-as-Rater
prompt-level modeling
value decoupling