🤖 AI Summary
This study reveals cross-risk conflicts (unintended degradation in one risk dimension, e.g., safety, fairness, or privacy, when optimizing a defense that targets another) and shows that they are common across large language models (LLMs).
Method: We introduce CrossRiskEval, the first interaction-aware evaluation framework, integrating 12 mainstream defense strategies, 14 defense-deployed models, and fine-grained task-based assessments. Through neuron-level analysis, we identify "conflict-entangled neurons" that exhibit opposing sensitivities across multiple risk dimensions.
Results: Empirical findings show that safety defenses can amplify indirect privacy leakage and biased outputs; fairness defenses increase misuse and privacy-leakage risk; and privacy defenses degrade both safety and fairness. These results challenge the prevailing paradigm of isolated, single-objective defense evaluation. CrossRiskEval establishes a foundation for systematic, interaction-aware assessment and provides theoretical insights and empirical evidence for designing robust multi-objective defenses that jointly optimize safety, fairness, and privacy.
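To make the neuron-level idea concrete, here is a minimal, purely illustrative sketch (not the paper's actual method): define a neuron's sensitivity to a risk as its mean activation shift between risk-probing and benign prompts, and flag a neuron as conflict-entangled when its sensitivities to two risks are both non-trivial but have opposite signs. The threshold `tau` and the toy activations are assumptions for illustration.

```python
# Illustrative sketch: flag "conflict-entangled" neurons, i.e. neurons whose
# activation sensitivity has opposite signs for two risk dimensions.
# Here "sensitivity" is simplified to the mean activation difference between
# risk-probing prompts and benign prompts.

def sensitivity(risk_acts, benign_acts):
    """Per-neuron mean activation shift: risk prompts minus benign prompts."""
    n = len(risk_acts[0])
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    return [mean(risk_acts, j) - mean(benign_acts, j) for j in range(n)]

def conflict_entangled(sens_a, sens_b, tau=0.1):
    """Indices of neurons with opposing, above-threshold sensitivities
    to risk A and risk B (e.g. safety vs. privacy)."""
    return [j for j, (a, b) in enumerate(zip(sens_a, sens_b))
            if abs(a) > tau and abs(b) > tau and a * b < 0]

# Toy activations: 3 neurons, 2 prompts per condition (made-up numbers).
benign  = [[0.0,  0.0, 0.0],  [0.0,  0.0, 0.0]]
safety  = [[0.5, -0.4, 0.0],  [0.7, -0.6, 0.1]]   # safety-probing prompts
privacy = [[-0.5, -0.3, 0.0], [-0.3, -0.5, 0.0]]  # privacy-probing prompts

s_safe = sensitivity(safety, benign)    # [0.6, -0.5, 0.05]
s_priv = sensitivity(privacy, benign)   # [-0.4, -0.4, 0.0]
print(conflict_entangled(s_safe, s_priv))  # → [0]: +safety, -privacy
```

Neuron 0 is flagged because suppressing its activation to reduce one risk would push it in the direction associated with the other, which is the intuition behind the cross-risk side effects reported above.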
📝 Abstract
Large Language Models (LLMs) have shown remarkable performance across various applications, but their deployment in sensitive domains raises significant concerns. To mitigate these risks, numerous defense strategies have been proposed. However, most existing studies assess these defenses in isolation, overlooking their broader impacts across other risk dimensions. In this work, we take the first step in investigating unintended interactions caused by defenses in LLMs, focusing on the complex interplay between safety, fairness, and privacy. Specifically, we propose CrossRiskEval, a comprehensive evaluation framework to assess whether deploying a defense targeting one risk inadvertently affects others. Through extensive empirical studies on 14 defense-deployed LLMs, covering 12 distinct defense strategies, we reveal several alarming side effects: 1) safety defenses may suppress direct responses to sensitive queries related to bias or privacy, yet still amplify indirect privacy leakage or biased outputs; 2) fairness defenses increase the risk of misuse and privacy leakage; 3) privacy defenses often impair safety and exacerbate bias. We further conduct a fine-grained neuron-level analysis to uncover the underlying mechanisms of these phenomena. Our analysis reveals the existence of conflict-entangled neurons in LLMs that exhibit opposing sensitivities across multiple risk dimensions. Further trend consistency analysis at both task and neuron levels confirms that these neurons play a key role in mediating the emergence of unintended behaviors following defense deployment. We call for a paradigm shift in LLM risk evaluation, toward holistic, interaction-aware assessment of defense strategies.
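The trend-consistency analysis mentioned in the abstract can be pictured as a simple correlation check: across defenses, compare the task-level change in a risk metric with the aggregate activation change of the flagged neurons. A sketch under assumed, illustrative numbers (not results from the paper):

```python
# Illustrative trend-consistency check: correlate per-defense task-level
# metric changes with the mean activation shift of candidate
# conflict-entangled neurons. A strong correlation suggests those neurons
# mediate the unintended side effect. All numbers are made up.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Per-defense deltas after deployment (hypothetical):
task_delta   = [0.12, 0.30, -0.05, 0.21]  # change in privacy-leakage rate
neuron_delta = [0.08, 0.25, -0.02, 0.18]  # activation shift of flagged neurons

print(round(pearson(task_delta, neuron_delta), 3))  # strong positive trend
```

Here the two trends move together across defenses, which is the kind of task-level/neuron-level agreement the consistency analysis looks for.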