Language Models that Think, Chat Better

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address shallow reasoning and the lack of verifiable supervision signals in open-ended tasks (e.g., outline writing, meal planning), this paper proposes RLMT, a reinforcement learning framework that extends the RLVR paradigm beyond verifiable domains: models generate long chain-of-thought reasoning before each response and are optimized with online RL against a preference-based reward model, compatible with DPO, PPO, and GRPO. RLMT can also be applied directly to base models without a supervised fine-tuning stage, akin to R1-Zero training. Across 40 training runs, RLMT consistently outperforms standard RLHF pipelines on chat and other open-ended benchmarks; notably, the best 8B model surpasses GPT-4o in chat and creative writing and rivals Claude-3.7-Sonnet (Thinking).

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) improves language model reasoning by using rule-based rewards in verifiable domains such as mathematics and code. However, RLVR leads to limited generalization for open-ended tasks -- such as writing outline essays or making meal plans -- where humans reason routinely. This paper shows that the RLVR paradigm is effective beyond verifiable domains, and introduces **RL** with **M**odel-rewarded **T**hinking (**RLMT**) for general-purpose chat capabilities. Using diverse real-world prompts, RLMT requires LMs to generate long CoT reasoning before responding, and optimizes them with online RL against a preference-based reward model as used in RLHF. Across 40 training runs on Llama-3.1-8B and Qwen-2.5-7B (both base and instruct) and multiple optimization algorithms (DPO, PPO, and GRPO), RLMT consistently outperforms standard RLHF pipelines. This includes substantial gains of 3-7 points on three chat benchmarks (AlpacaEval2, WildBench, and ArenaHardV2), along with 1-3 point improvements on other tasks like creative writing and general knowledge. Our best 8B model surpasses GPT-4o in chat and creative writing and rivals Claude-3.7-Sonnet (Thinking). RLMT can also be applied directly to base models without an SFT stage, akin to R1-Zero training. Remarkably, with only 7K prompts, Llama-3.1-8B base trained with our RLMT recipe outperforms Llama-3.1-8B-Instruct post-trained with a complex multi-stage pipeline on 25M+ examples. We close with qualitative and quantitative analyses of how trained models plan their responses. Our results prompt a rethinking of the post-training pipeline and call upon future work to understand and employ thinking more broadly.
Problem

Research questions and friction points this paper is trying to address.

Improving language model reasoning for open-ended tasks beyond verifiable domains
Enhancing general-purpose chat capabilities through reinforcement learning with thinking
Optimizing language models to generate better responses via reasoning before answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLMT optimizes language models with online reinforcement learning
Uses preference-based reward models for general-purpose chat capabilities
Requires models to generate Chain-of-Thought reasoning before responses
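The three ingredients above can be sketched as one GRPO-style training step: sample several think-then-answer completions per prompt, score only the user-visible responses with a preference reward model, and normalize rewards within the group. This is a minimal illustrative sketch, not the paper's code; `sample_completion` and `reward_model` are hypothetical stubs standing in for the policy LM and the RLHF-style reward model.

```python
import random
import statistics

def sample_completion(prompt: str) -> str:
    # Stub for policy sampling: long CoT inside <think> tags,
    # followed by the user-visible response.
    plan = f"plan-{random.randint(0, 9)}"
    return f"<think>{plan}</think>Answer to: {prompt}"

def strip_thinking(completion: str) -> str:
    # Only the text after </think> is shown to the reward model.
    return completion.split("</think>", 1)[-1]

def reward_model(prompt: str, response: str) -> float:
    # Stub for a preference-based reward model (as in RLHF).
    return float(len(response) % 7)

def grpo_advantages(prompt: str, k: int = 4):
    """Sample k completions and return (completion, advantage) pairs,
    where the advantage is the group-normalized reward (GRPO-style)."""
    completions = [sample_completion(prompt) for _ in range(k)]
    rewards = [reward_model(prompt, strip_thinking(c)) for c in completions]
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(c, (r - mu) / sigma) for c, r in zip(completions, rewards)]
```

In an actual run, the advantages would weight the policy-gradient update on each completion's tokens (including the hidden thinking tokens), so the model learns which plans lead to responses the reward model prefers.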