SAGE-Eval: Evaluating LLMs for Systematic Generalizations of Safety Facts

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit critical deficiencies in generalizing safety-critical factual knowledge across contexts, posing significant risks in real-world deployment. Method: We introduce SAGE-Eval, the first systematic benchmark for evaluating safety-fact generalization, comprising 104 authoritative safety facts (e.g., infant choking hazards) manually sourced from the CDC and other reputable organizations and systematically augmented into 10,428 realistic user queries across seven high-risk domains (e.g., Outdoor Activities, Medicine). Contribution/Results: Experiments reveal that even state-of-the-art models such as Claude-3.7-Sonnet pass only 58% of the tested safety facts, and performance correlates only weakly with model capability and training compute, challenging the "bigger is safer" assumption. The benchmark, dataset, and code are publicly released to enable verifiable pre-deployment safety evaluation of LLMs.

📝 Abstract
Do LLMs robustly generalize critical safety facts to novel situations? Lacking this ability is dangerous when users ask naive questions. For instance, "I'm considering packing melon balls for my 10-month-old's lunch. What other foods would be good to include?" Before offering food options, the LLM should warn that melon balls pose a choking hazard to toddlers, as documented by the CDC. Failing to provide such warnings could result in serious injuries or even death. To evaluate this, we introduce SAGE-Eval, SAfety-fact systematic GEneralization evaluation, the first benchmark that tests whether LLMs properly apply well-established safety facts to naive user queries. SAGE-Eval comprises 104 facts manually sourced from reputable organizations, systematically augmented to create 10,428 test scenarios across 7 common domains (e.g., Outdoor Activities, Medicine). We find that the top model, Claude-3.7-Sonnet, passes only 58% of all the safety facts tested. We also observe that model capabilities and training compute correlate only weakly with performance on SAGE-Eval, implying that scaling up is not the golden solution. Our findings suggest frontier LLMs still lack robust generalization ability. We recommend developers use SAGE-Eval in pre-deployment evaluations to assess model reliability in addressing salient risks. We publicly release SAGE-Eval at https://huggingface.co/datasets/YuehHanChen/SAGE-Eval and our code is available at https://github.com/YuehHanChen/SAGE-Eval/tree/main.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to generalize safety facts to novel scenarios
Assesses whether LLMs volunteer critical safety warnings in response to naive queries
Tests model reliability in applying well-documented safety facts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces SAGE-Eval benchmark for safety generalization
Manually sources 104 facts from reputable organizations
Tests 10,428 scenarios across 7 common domains
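The headline metric (a model "passes only 58% of all the safety facts tested") implies that scenario-level results are aggregated per fact: each of the 104 facts expands into many augmented queries, and the model is scored on facts, not individual queries. The sketch below illustrates one plausible aggregation, a strict all-scenarios-must-pass rule; the fact IDs, data layout, and the exact aggregation rule are assumptions for illustration, not the paper's actual scoring code.

```python
from collections import defaultdict

def fact_pass_rate(results):
    """Fraction of safety facts passed.

    results: iterable of (fact_id, passed) pairs, one per test scenario.
    A fact counts as passed only if the model passes every scenario
    augmented from it (a strict aggregation, assumed here).
    """
    by_fact = defaultdict(list)
    for fact_id, passed in results:
        by_fact[fact_id].append(passed)
    passed_facts = sum(all(outcomes) for outcomes in by_fact.values())
    return passed_facts / len(by_fact)

# Hypothetical scenario-level outcomes for two facts:
results = [
    ("choking-melon-balls", True),
    ("choking-melon-balls", True),   # all scenarios pass -> fact passes
    ("medicine-dosage", True),
    ("medicine-dosage", False),      # one miss -> fact fails
]
print(fact_pass_rate(results))  # 0.5
```

Under this strict rule, a single missed warning among a fact's augmented queries fails the whole fact, which is consistent with the benchmark's emphasis on robust generalization rather than average-case behavior.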