AI Summary
In real-world deployments, large multimodal reasoning models (LMRMs) suffer from a misalignment between safety and capability: joint image-text jailbreaks easily circumvent safeguards, while single-objective optimization induces policy drift, leading to either excessive refusal or hazardous compliance. This paper proposes COSMO-RL, a hybrid reinforcement learning framework that enables, for the first time, end-to-end co-optimization of safety constraints and multimodal reasoning capability across multiple objectives and tasks. Its core innovations include: (1) multimodal alignment-guided joint reward modeling for safety and capability; (2) a stability-aware policy-update mechanism; and (3) a backbone-agnostic, transferable training paradigm. Experiments show that the resulting model, COSMO-R1, is significantly more robust to multimodal jailbreak attacks, improves instruction following and complex reasoning, and reduces unwarranted refusal rates by 37.2%.
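The paper does not spell out how the joint reward is computed; as a minimal sketch of the co-optimization idea, assuming separate safety and capability reward models that each emit a scalar score in [0, 1], the two signals might be blended into one training reward (the function name and weighting are illustrative, not the authors' implementation):

```python
def joint_reward(safety_score: float, capability_score: float,
                 w_safety: float = 0.5) -> float:
    """Blend safety and capability signals into a single scalar reward.

    Hypothetical illustration: with both objectives in one reward, an
    unsafe response (safety_score near 0) is penalized even when it is
    highly capable, instead of the two objectives competing in separate
    single-objective training runs.
    """
    if not (0.0 <= safety_score <= 1.0 and 0.0 <= capability_score <= 1.0):
        raise ValueError("scores must lie in [0, 1]")
    return w_safety * safety_score + (1.0 - w_safety) * capability_score
```

With `w_safety = 0.5`, a fully capable but unsafe response scores 0.5 rather than 1.0, so the policy gradient no longer drifts toward hazardous compliance; raising `w_safety` trades the other way, toward refusal.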
Abstract
Large Multimodal Reasoning Models (LMRMs) are moving into real applications, where they must be both useful and safe. Safety is especially challenging in multimodal settings: images and text can be combined to bypass guardrails, and single-objective training can cause policy drift that yields over-refusal on benign inputs or unsafe compliance on risky ones. We present COSMO-RL, a mixed reinforcement learning framework that trains reasoning-oriented LMRMs under multimodal, multitask, and multiobjective signals, and we release the resulting model, COSMO-R1. Our approach aims to let safety and capability grow together in one stable pipeline rather than competing during alignment. In experiments, COSMO-R1 improves safety while maintaining, and often improving, multimodal reasoning and instruction following, shows stronger robustness to multimodal jailbreaks, and reduces unnecessary refusals. The framework also transfers across backbones with consistent gains. Ablations support the design choices, indicating a simple path to advancing safety and general capability together in LMRMs.