MOA: Multi-Objective Alignment for Role-Playing Agents

📅 2025-12-10
🤖 AI Summary
Role-playing agents (RPAs) require joint optimization of instruction following, domain knowledge, and linguistic style consistency; however, existing supervised fine-tuning suffers from overfitting, while reinforcement learning struggles to balance multi-dimensional objectives. To address this, the authors propose MOA, the first multi-dimensional fine-grained alignment framework tailored for RPAs. The method combines a multi-objective co-optimization strategy over fine-grained scoring rubrics with a chain-of-thought (CoT)-enhanced off-policy rollout mechanism, enabling simultaneous improvement across multiple quality dimensions while preserving output diversity. Evaluated on the PersonaGym and RoleMRC benchmarks, the resulting 8B model matches or surpasses GPT-4o and Claude across numerous metrics, with significant gains in role knowledge accuracy, persona style consistency, scenario generalization, and multi-turn dialogue coherence.

📝 Abstract
Role-playing agents (RPAs) must simultaneously master many conflicting skills -- following multi-turn instructions, exhibiting domain knowledge, and adopting a consistent linguistic style. Existing work either relies on supervised fine-tuning (SFT), which over-fits surface cues and yields low diversity, or applies reinforcement learning (RL) that fails to learn across multiple dimensions for comprehensive RPA optimization. We present MOA (Multi-Objective Alignment), a reinforcement-learning framework that enables multi-dimensional, fine-grained rubric optimization for general RPAs. MOA introduces a novel multi-objective optimization strategy that trains simultaneously on multiple fine-grained rubrics to boost optimization performance. In addition, to improve output diversity and quality, MOA employs thought-augmented rollout with off-policy guidance. Extensive experiments on challenging benchmarks such as PersonaGym and RoleMRC show that MOA enables an 8B model to match or even outperform strong baselines such as GPT-4o and Claude across numerous dimensions. This demonstrates the great potential of MOA for building RPAs that simultaneously meet the demands of role knowledge, persona style, diverse scenarios, and complex multi-turn conversations.
Problem

Research questions and friction points this paper is trying to address.

Optimizing multiple conflicting skills in role-playing agents
Addressing over-fitting and low diversity in existing methods
Enhancing model performance across diverse conversational dimensions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective reinforcement learning for role-playing agents
Thought-augmented rollout with off-policy guidance
Simultaneous fine-grained rubric optimization for multiple skills
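As a rough sketch of the multi-objective idea only: one simple way to train on several fine-grained rubrics at once is to have a judge score each rubric separately and collapse the scores into a single scalar reward for the RL update. All names, the `RubricScore` structure, and the weighted-mean aggregation below are assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RubricScore:
    """One fine-grained rubric judged on a rollout (hypothetical structure)."""
    name: str
    score: float   # judge score in [0, 1]
    weight: float  # relative importance of this objective

def aggregate_reward(rubrics: list[RubricScore]) -> float:
    """Collapse per-rubric scores into one scalar reward via a weighted mean.

    The paper's actual co-optimization strategy may aggregate differently
    (e.g., per-objective advantages); this is the simplest baseline.
    """
    total_weight = sum(r.weight for r in rubrics)
    return sum(r.score * r.weight for r in rubrics) / total_weight

# Example: three rubrics scored on a single rollout, with persona style
# weighted more heavily than the other objectives.
rollout_scores = [
    RubricScore("instruction_following", 0.9, 1.0),
    RubricScore("domain_knowledge", 0.6, 1.0),
    RubricScore("persona_style", 0.8, 2.0),
]
print(round(aggregate_reward(rollout_scores), 3))  # 0.775
```

Weighting the objectives explicitly makes the trade-off between conflicting skills (e.g., style consistency versus factual knowledge) a tunable hyperparameter rather than an implicit property of the data.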
Authors

Chonghua Liao (Tsinghua University)
Ke Wang (Tongyi Lab)
Yuchuan Wu (Alibaba Tongyi Lab)
Fei Huang (Tongyi Lab)
Yongbin Li (Tongyi Lab)

Topics: Conversational AI, Large Language Models, Social Intelligence