Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

📅 2026-03-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses a critical blind spot in the safety alignment mechanisms of large language models (LLMs), which frequently misclassify legitimate cybersecurity defense requests as abusive due to overreliance on semantic similarity to sensitive keywords, while neglecting user intent and authorization status. Drawing on 2,390 real-world defensive scenarios from NCCDC competition data, the work introduces the concept of "defensive refusal bias" and demonstrates that legitimate queries containing security-sensitive terms are rejected at 2.72 times the rate of semantically equivalent neutral queries (p < 0.001). Notably, rejection rates reach 43.8% for system hardening tasks and 34.3% for malware analysis. Counterintuitively, explicitly stating authorization further increases rejection likelihood, revealing a fundamental flaw in current alignment approaches within cybersecurity contexts.

๐Ÿ“ Abstract
Safety alignment in large language models (LLMs), particularly for cybersecurity tasks, primarily focuses on preventing misuse. While this approach reduces direct harm, it obscures a complementary failure mode: denial of assistance to legitimate defenders. We study Defensive Refusal Bias -- the tendency of safety-tuned frontier LLMs to refuse assistance for authorized defensive cybersecurity tasks when those tasks use language similar to that of offensive cyber tasks. Based on 2,390 real-world examples from the National Collegiate Cyber Defense Competition (NCCDC), we find that LLMs refuse defensive requests containing security-sensitive keywords at $2.72\times$ the rate of semantically equivalent neutral requests ($p < 0.001$). The highest refusal rates occur in the most operationally critical tasks: system hardening (43.8%) and malware analysis (34.3%). Interestingly, explicit authorization, where the user directly instructs the model that they have authority to complete the target task, increases refusal rates, suggesting models interpret justifications as adversarial rather than exculpatory. These findings are urgent for interactive use and critical for autonomous defensive agents, which cannot rephrase refused queries or retry. Our findings suggest that current LLM cybersecurity alignment relies on semantic similarity to harmful content rather than reasoning about intent or authorization. We call for mitigations that analyze intent to maximize defensive capabilities while still preventing harmful compliance.
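The abstract's central comparison — refusal rates on keyword-laden defensive requests versus semantically equivalent neutral paraphrases — can be illustrated with a small sketch. This is not the paper's code; the function name, the two-proportion z-test, and the example counts are all assumptions chosen so the ratio works out to the reported 2.72×.

```python
import math

def refusal_rate_ratio(sensitive_refused, sensitive_total,
                       neutral_refused, neutral_total):
    """Compare refusal rates between two prompt conditions.

    Illustrative sketch only: computes the rate ratio plus a
    two-proportion z-test (normal approximation). The paper's actual
    statistical procedure is not specified on this page.
    """
    p1 = sensitive_refused / sensitive_total   # keyword-laden condition
    p2 = neutral_refused / neutral_total       # neutral paraphrase condition
    pooled = (sensitive_refused + neutral_refused) / (sensitive_total + neutral_total)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / sensitive_total + 1 / neutral_total))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p1 / p2, z, p_value

# Hypothetical counts (NOT the paper's data), picked to yield a 2.72x ratio:
ratio, z, p = refusal_rate_ratio(272, 1000, 100, 1000)
print(f"ratio={ratio:.2f}, z={z:.2f}, p={p:.2g}")
```

With counts of this magnitude, a 2.72× gap in refusal rates is overwhelmingly significant (p far below 0.001), consistent with the significance level the abstract reports.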
Problem

Research questions and friction points this paper is trying to address.

Defensive Refusal Bias
safety alignment
cybersecurity
large language models
authorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defensive Refusal Bias
safety alignment
large language models
cybersecurity
intent reasoning
David Campbell
Security and Policy Research Lab, Scale AI

Neil Kale
Carnegie Mellon University
Machine Learning, AI Safety, ML for Healthcare

Udari Madhushani Sehwag
Research Scientist, Scale AI
Agentic AI, Alignment, Scalable Oversight, AI Safety, Multi-agent RL

Bert Herring
Security and Policy Research Lab, Scale AI

Christina Q Knight
Security and Policy Research Lab, Scale AI

Dan Borges
Security Engineering, Scale AI

Nick Price
Security Engineering, Scale AI

Alex Levinson
Security Engineering, Scale AI