CSR-Bench: A Benchmark for Evaluating the Cross-modal Safety and Reliability of MLLMs

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of multimodal large language models (MLLMs) to rely on unimodal shortcuts rather than genuine vision-language integration, which poses significant safety and reliability risks. To this end, we propose CSR-Bench, the first fine-grained cross-modal safety evaluation benchmark designed specifically for joint image-text reasoning. It covers four interaction categories (safety violations, over-rejection, bias, and hallucination) and incorporates text-only controls to diagnose modality-induced behavioral shifts. Through adversarial example construction, paired controlled experiments, and 61 fine-grained labels, we systematically evaluate 16 prominent MLLMs, revealing critical alignment gaps: weak safety awareness, strong language dominance, and consistent performance degradation from text-only to multimodal inputs. Our analysis further indicates that current safety improvements largely stem from rejection-based heuristics and that a notable trade-off exists between safety and non-discriminatory behavior.

📝 Abstract
Multimodal large language models (MLLMs) enable interaction over both text and images, but their safety behavior can be driven by unimodal shortcuts instead of true joint intent understanding. We introduce CSR-Bench, a benchmark for evaluating cross-modal reliability through four stress-testing interaction patterns spanning Safety, Over-rejection, Bias, and Hallucination, covering 61 fine-grained types. Each instance is constructed to require integrated image-text interpretation, and we additionally provide paired text-only controls to diagnose modality-induced behavior shifts. We evaluate 16 state-of-the-art MLLMs and observe systematic cross-modal alignment gaps. Models show weak safety awareness, strong language dominance under interference, and consistent performance degradation from text-only controls to multimodal inputs. We also observe a clear trade-off between reducing over-rejection and maintaining safe, non-discriminatory behavior, suggesting that some apparent safety gains may come from refusal-oriented heuristics rather than robust intent understanding. WARNING: This paper contains unsafe content.
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Cross-modal Safety
Reliability
Modality Alignment
Safety Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal Safety
Multimodal Reliability
Modality-induced Behavior Shift
Over-rejection Trade-off
Integrated Image-Text Understanding