🤖 AI Summary
Real-world reasoning requires integrating cognitive capabilities across mathematics, programming, and logic. Existing work, however, predominantly focuses on single-domain training and lacks a systematic investigation of cross-domain interaction mechanisms (knowledge transfer, interference, and synergy) within reinforcement learning frameworks.
Method: We propose a data-centric multi-domain fusion analysis framework to systematically characterize cross-domain generalization and interference. Using Group Relative Policy Optimization (GRPO), we conduct both single-domain and multi-domain joint training on Qwen-2.5-7B, comparing base and instruction-tuned models under verifiable reward signals. We quantitatively evaluate the impact of supervised fine-tuning (SFT), curriculum learning, and reward design on multi-domain performance.
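For concreteness, here is a minimal sketch of the group-relative advantage computation at the heart of GRPO: each sampled completion's reward is normalized against the mean and standard deviation of its own sampling group. The function name and the toy binary rewards are our illustration, not code from the paper.

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Group-relative advantages: normalize each completion's reward
    against the mean and std of its sampling group (GRPO-style)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four completions for one math prompt, scored by a verifiable
# checker (1.0 if the final answer matches the reference, else 0.0).
group_rewards = [1.0, 0.0, 0.0, 1.0]
print(grpo_advantages(group_rewards))  # positive for correct, negative for wrong
```

Because the advantage is relative within the group, GRPO needs no learned value function; the epsilon term merely guards against division by zero when every completion in a group receives the same reward.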
Results: Empirical results show that judicious multi-domain data composition significantly enhances generalization, revealing strong positive transfer (e.g., math → code). We further identify, for the first time, reward imbalance as a critical cause of performance degradation, and we outline a verifiable optimization pathway for multi-domain alignment.
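The summary does not spell out the paper's specific remedy for reward imbalance. As one illustrative mitigation, sketched below entirely under our own assumptions, per-domain rewards can be rescaled to a comparable magnitude before batches are mixed, so that no single domain's reward scale dominates the policy gradient.

```python
from collections import defaultdict

def balance_rewards(samples: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Rescale raw rewards so each domain contributes on a comparable scale.
    `samples` is a list of (domain, raw_reward) pairs.
    Hypothetical illustration -- the paper's actual reward design may differ."""
    by_domain = defaultdict(list)
    for domain, reward in samples:
        by_domain[domain].append(reward)
    # Divide by each domain's mean absolute reward so no domain dominates.
    scale = {d: max(sum(abs(r) for r in rs) / len(rs), 1e-6)
             for d, rs in by_domain.items()}
    return [(d, r / scale[d]) for d, r in samples]

# Math rewards are binary; code rewards here are small partial-credit scores.
mixed = [("math", 1.0), ("math", 0.0), ("code", 0.2), ("code", 0.1)]
print(balance_rewards(mixed))
```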
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for enhancing the reasoning capabilities of LLMs. Existing research has predominantly concentrated on isolated reasoning domains such as mathematical problem-solving, coding tasks, or logical reasoning. However, real-world reasoning scenarios inherently demand an integrated application of multiple cognitive skills, yet the interplay among these skills under reinforcement learning remains poorly understood. To bridge this gap, we present a systematic investigation of multi-domain reasoning within the RLVR framework, explicitly focusing on three primary domains: mathematical reasoning, code generation, and logical puzzle solving. Our study comprises four key components: (1) Leveraging the GRPO algorithm and the Qwen-2.5-7B model family, we thoroughly evaluate in-domain improvements and cross-domain generalization when models are trained on single-domain datasets. (2) We examine the intricate interactions, including mutual enhancements and conflicts, that emerge during combined cross-domain training. (3) To understand the influence of SFT on RL, we analyze and compare the performance of base and instruct models under identical RL configurations. (4) We systematically explore critical RL training details, including curriculum learning strategies, variations in reward design, and language-specific factors. Through extensive experiments, our results offer significant insights into the dynamics governing domain interactions, revealing key factors that influence both specialized and generalizable reasoning performance. These findings provide valuable guidance for optimizing RL methodologies to foster comprehensive, multi-domain reasoning capabilities in LLMs.
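RLVR's defining ingredient is a reward that can be checked programmatically rather than learned. Below is a minimal sketch of two such verifiers for the math and code domains; the function names, the exact-match rule for math answers, and the subprocess-based test runner are our illustrative assumptions, not the paper's implementation.

```python
import subprocess
import sys
import tempfile

def math_reward(completion: str, reference: str) -> float:
    """Verifiable math reward: 1.0 iff the final line of the completion
    exactly matches the reference answer (illustrative matching rule)."""
    lines = completion.strip().splitlines()
    final = lines[-1].strip() if lines else ""
    return 1.0 if final == reference.strip() else 0.0

def code_reward(program: str, tests: str, timeout: float = 5.0) -> float:
    """Verifiable code reward: 1.0 iff the program plus its unit tests
    runs to completion with exit code 0 within the timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0

print(math_reward("Step 1: 6 * 7\n42", "42"))        # 1.0
print(code_reward("def add(a, b):\n    return a + b",
                  "assert add(2, 3) == 5"))          # 1.0
```

Binary signals like these are what make the rewards "verifiable": correctness is decided by an external checker, which also makes cross-domain reward scales directly comparable in principle, though in practice pass rates can differ sharply across domains.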