🤖 AI Summary
Large Audio-Language Models (LALMs) are markedly more susceptible to generating harmful responses when processing audio inputs than text. Existing safety alignment approaches face two key bottlenecks: (1) LLM-based textual steering fails because text and audio activation distributions diverge across modalities; and (2) prompt-based defenses induce excessive rejection of benign speech. This paper introduces Safe-Ablated Refusal Steering (SARSteer), the first inference-time safety alignment framework tailored to LALMs. It combines text-guided fine-grained refusal control with decoupling and ablation of a safe subspace, enabling dynamic, precise response filtering without modifying the raw audio. By explicitly addressing cross-modal control misalignment, our method significantly improves harmful-query interception (+23.6% interception rate) while holding false rejection of benign utterances to only 1.2%, outperforming all existing baselines on both safety and utility metrics.
📝 Abstract
Large Audio-Language Models (LALMs) are becoming essential as a powerful multimodal backbone for real-world applications. However, recent studies show that audio inputs can elicit harmful responses more easily than text, exposing new risks to deployment. While safety alignment has made initial advances in LLMs and Large Vision-Language Models (LVLMs), we find that vanilla adaptation of these approaches to LALMs faces two key limitations: 1) LLM-based steering fails under audio input due to the large distributional gap between text and audio activations, and 2) prompt-based defenses induce over-refusals on benign-speech queries. To address these challenges, we propose Safe-Ablated Refusal Steering (SARSteer), the first inference-time defense framework for LALMs. Specifically, SARSteer leverages text-derived refusal steering to enforce refusal without manipulating audio inputs and introduces decomposed safe-space ablation to mitigate over-refusal. Extensive experiments demonstrate that SARSteer significantly improves harmful-query refusal while preserving benign responses, establishing a principled step toward safety alignment in LALMs.
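The abstract only sketches the two ingredients, so below is a minimal, illustrative Python sketch of how such a pipeline could look: a refusal direction derived from text activations via difference-in-means, a stand-in "safe subspace" built from benign activations, ablation of that subspace from the steering direction, and a forward hook that applies the result at inference time. Every function name, the difference-in-means construction, the PCA-style subspace, and the HuggingFace-style hook are assumptions for illustration, not the paper's actual implementation.

```python
import torch


def refusal_direction(harmful_acts: torch.Tensor, benign_acts: torch.Tensor) -> torch.Tensor:
    """Difference-in-means refusal direction from *text-derived* activations.

    harmful_acts, benign_acts: (num_prompts, hidden_dim) hidden states
    collected at a chosen layer for harmful / benign text prompts.
    """
    d = harmful_acts.mean(dim=0) - benign_acts.mean(dim=0)
    return d / d.norm()


def safe_subspace(benign_acts: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Top-k principal directions of benign activations, used here as a
    stand-in 'safe subspace' (hypothetical construction)."""
    centered = benign_acts - benign_acts.mean(dim=0, keepdim=True)
    # Right singular vectors = principal directions of the benign activations.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return vh[:k]  # (k, hidden_dim), rows orthonormal


def ablate(direction: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Remove the component of `direction` lying in span(basis), so steering
    avoids the benign ('safe') subspace and over-refusal is reduced."""
    proj = basis.T @ (basis @ direction)
    out = direction - proj
    return out / out.norm()


def steering_hook(direction: torch.Tensor, alpha: float = 4.0):
    """Forward hook adding the ablated refusal direction to hidden states
    at inference time; the audio input itself is never modified."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden.dtype).to(hidden.device)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook
```

With a HuggingFace-style decoder, the hook could be registered on one layer, e.g. `model.model.layers[idx].register_forward_hook(steering_hook(ablate(d, basis)))`; the layer index and steering strength `alpha` would need tuning per model.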