🤖 AI Summary
Prior safety-alignment research on large audio-language models (LALMs) overlooks the critical dimension of speaker emotion, leaving a gap in our understanding of their vulnerability to emotionally modulated adversarial speech. Method: We systematically construct a dataset of malicious speech instructions spanning diverse emotion types and intensity gradients, enabling cross-emotion safety evaluation of state-of-the-art LALMs. Contribution/Results: Our experiments reveal that emotion does not degrade safety linearly; instead, moderate emotional intensity consistently elicits the highest proportion of unsafe responses, a pronounced non-monotonic relationship. This finding exposes a fundamental deficiency in current alignment methods: insufficient emotional robustness. To address it, we propose "emotion-robust alignment" as a new research direction, providing both theoretical grounding and empirical evidence for safety modeling in realistic, emotion-rich speech interaction scenarios.
📝 Abstract
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, opening new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion in LALM safety. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensity levels, and evaluate several state-of-the-art LALMs on it. Our results reveal substantial safety inconsistencies: different emotions elicit different rates of unsafe responses, and the effect of intensity is non-monotonic, with medium-intensity expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed for robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
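To make the evaluation protocol concrete, below is a minimal sketch of how unsafe-response rates could be aggregated per emotion × intensity cell. This is not the authors' released code: `query_lalm`, `is_unsafe`, and the emotion/intensity labels are hypothetical placeholders standing in for whatever model API, safety judge, and dataset annotations were actually used.

```python
from collections import defaultdict

# Illustrative labels; the paper's actual emotion set and intensity scale are assumptions here.
EMOTIONS = ["neutral", "happy", "sad", "angry", "fearful"]
INTENSITIES = ["low", "medium", "high"]

def query_lalm(model, audio_path: str) -> str:
    """Placeholder: send one speech instruction to the model and return its text reply.
    The call signature depends on the specific LALM being evaluated."""
    raise NotImplementedError

def is_unsafe(response: str) -> bool:
    """Placeholder safety judge (e.g. a rule set or an LLM-as-judge); True if unsafe."""
    raise NotImplementedError

def unsafe_rates(model, dataset):
    """dataset: iterable of dicts with 'audio', 'emotion', 'intensity' keys.
    Returns the unsafe-response rate for each (emotion, intensity) cell."""
    counts = defaultdict(lambda: [0, 0])  # (emotion, intensity) -> [n_unsafe, n_total]
    for item in dataset:
        reply = query_lalm(model, item["audio"])
        cell = counts[(item["emotion"], item["intensity"])]
        cell[0] += is_unsafe(reply)  # bool counts as 0/1
        cell[1] += 1
    return {key: n_unsafe / n_total for key, (n_unsafe, n_total) in counts.items()}

# The non-monotonic intensity effect reported above would surface as the 'medium'
# column dominating 'low' and 'high' within an emotion's row:
# rates = unsafe_rates(model, dataset)
# for emo in EMOTIONS:
#     print(emo, [rates.get((emo, lvl)) for lvl in INTENSITIES])
```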