COSMO-RL: Towards Trustworthy LMRMs via Joint Safety and Stability

📅 2025-10-05
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In real-world deployments, large multimodal reasoning models (LMRMs) suffer from a misalignment between safety and capability: joint image-text jailbreaks easily circumvent safeguards, while single-objective optimization induces policy drift, leading to either excessive refusal or hazardous compliance. This paper proposes COSMO-RL, a hybrid reinforcement learning framework that enables, for the first time, end-to-end co-optimization of safety constraints and multimodal reasoning capabilities across multiple objectives and tasks. Its core innovations include: (1) multimodal alignment-guided joint reward modeling for safety and capability; (2) stability-aware policy update mechanisms; and (3) a backbone-agnostic, transferable training paradigm. Experiments demonstrate that the resulting model, COSMO-R1, significantly improves robustness against multimodal jailbreak attacks, enhances instruction following and complex reasoning, and reduces unwarranted refusal rates by 37.2%.

๐Ÿ“ Abstract
Large Multimodal Reasoning Models (LMRMs) are moving into real applications, where they must be both useful and safe. Safety is especially challenging in multimodal settings: images and text can be combined to bypass guardrails, and single-objective training can cause policy drift that yields over-refusal on benign inputs or unsafe compliance on risky ones. We present COSMO-RL, a mixed reinforcement learning framework that trains reasoning-oriented LMRMs under multimodal, multitask, and multi-objective signals, and we release the resulting model, COSMO-R1. Our approach aims to let safety and capability grow together in one stable pipeline rather than competing during alignment. In experiments, COSMO-R1 improves safety while maintaining, and often improving, multimodal reasoning and instruction following; shows stronger robustness to multimodal jailbreaks; and reduces unnecessary refusals. The framework also transfers across backbones with consistent gains. Ablations support the design choices, indicating a simple path to advancing safety and general capability together in LMRMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal model safety against adversarial inputs
Preventing policy drift in single-objective training systems
Balancing safety with reasoning capability in alignment processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed reinforcement learning for multimodal reasoning models
Joint training on safety and stability objectives
Transferable framework across model backbones
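The paper itself does not publish its reward or update equations here, but the joint-objective idea behind the bullets above can be sketched as follows. All function names, weights, and the baseline value are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of COSMO-RL's joint-objective idea: blend per-response
# safety and capability signals into one scalar reward, and clip advantages
# as a crude stability-aware guard against policy drift. Illustrative only.

def joint_reward(safety_score: float, capability_score: float,
                 w_safety: float = 0.5) -> float:
    """Weighted blend of safety and capability rewards (weights assumed)."""
    return w_safety * safety_score + (1.0 - w_safety) * capability_score

def clipped_advantage(advantage: float, clip: float = 2.0) -> float:
    """Bound the advantage magnitude so no single objective dominates an update."""
    return max(-clip, min(clip, advantage))

# Example: a response that is fully safe but only moderately capable.
r = joint_reward(safety_score=1.0, capability_score=0.4)
a = clipped_advantage(r - 0.5)  # 0.5 is an assumed value baseline
```

Training on the blended scalar, rather than alternating between separate safety and capability phases, is one plausible way to realize "safety and capability grow together" without the drift that single-objective optimization induces.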
Authors

Yizhuo Ding (Fudan University)
Mingkang Chen (Shanghai AI Laboratory)
Qiuhua Liu (Shenzhen University)
Fenghua Weng (ShanghaiTech University)
Wanying Qu (Fudan University)
Yue Yang (Shanghai AI Laboratory)
Yugang Jiang (Fudan University)
Zuxuan Wu (Fudan University)
Yanwei Fu (Fudan University)
Wenqi Shao (Shanghai AI Laboratory)