MPO: Multilingual Safety Alignment via Reward Gap Optimization

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing safety alignment methods such as RLHF and DPO are predominantly monolingual and degrade significantly on noisy multilingual data, hindering the safe global deployment of large language models (LLMs). To address this, the paper proposes a cross-lingual reward gap optimization paradigm anchored on English: a preference-learning framework that minimizes the discrepancy between the English reward gap and those of target languages, transferring safety capabilities from English to multiple languages with minimal loss. The approach is combined with multilingual safety instruction tuning and a unified evaluation framework. Empirical results on LLaMA-3.1, Gemma-2, and Qwen2.5 show substantial improvements in safety alignment across eight languages while preserving general multilingual understanding and generation capabilities.

📝 Abstract
Large language models (LLMs) have become increasingly central to AI applications worldwide, necessitating robust multilingual safety alignment to ensure secure deployment across diverse linguistic contexts. Existing preference learning methods for safety alignment, such as RLHF and DPO, are primarily monolingual and struggle with noisy multilingual data. To address these limitations, we introduce Multilingual reward gaP Optimization (MPO), a novel approach that leverages the well-aligned safety capabilities of the dominant language (English) to improve safety alignment across multiple languages. MPO directly minimizes the reward gap difference between the dominant language and target languages, effectively transferring safety capabilities while preserving the original strengths of the dominant language. Extensive experiments on three LLMs, LLaMA-3.1, Gemma-2 and Qwen2.5, validate MPO's efficacy in multilingual safety alignment without degrading general multilingual utility.
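The core mechanism the abstract describes, pulling a target language's preference reward gap toward the well-aligned English gap, can be sketched with DPO-style implicit rewards. This is an illustrative reconstruction, not the paper's exact objective: the `beta` value, the function names, and the logistic form of the loss are all assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def implicit_reward(logp_policy, logp_ref, beta=0.1):
    """DPO-style implicit reward: beta * log(pi(y|x) / pi_ref(y|x))."""
    return beta * (logp_policy - logp_ref)

def reward_gap(lp_chosen, lp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Reward margin of the safe (chosen) response over the unsafe
    (rejected) response for one preference pair."""
    return (implicit_reward(lp_chosen, ref_chosen, beta)
            - implicit_reward(lp_rejected, ref_rejected, beta))

def mpo_loss(gap_en, gap_tgt):
    """Logistic loss on the cross-lingual gap difference (an assumed
    form): small when the target-language gap matches or exceeds the
    English gap, large when it falls behind."""
    return -math.log(sigmoid(gap_tgt - gap_en))

# Toy sequence log-probabilities for one English / target-language pair.
gap_en = reward_gap(-10.0, -12.0, -11.0, -11.0)   # English margin: 0.2
gap_tgt = reward_gap(-10.5, -11.0, -10.8, -10.8)  # target margin: 0.05
loss = mpo_loss(gap_en, gap_tgt)  # penalizes the target lagging English
```

Because the English gap enters only as a reference (in training it would be computed without gradients), minimizing this loss moves the target language's margin toward English's rather than disturbing the dominant language, which matches the transfer behavior claimed in the abstract.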
Problem

Research questions and friction points this paper is trying to address.

LLMs require robust multilingual safety alignment for secure deployment
Existing preference learning methods (RLHF, DPO) struggle with noisy multilingual data
English safety capabilities do not transfer effectively to other languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directly minimizes the cross-lingual reward gap difference for multilingual safety
Transfers English safety capabilities to target languages
Preserves the dominant language's original strengths