Dr Genre: Reinforcement Learning from Decoupled LLM Feedback for Generic Text Rewriting

📅 2025-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of jointly optimizing multiple objectives—factual consistency, stylistic adaptation, and conversational naturalness—in general-purpose text rewriting. The authors propose Dr Genre, a decoupled reward learning framework that combines objective-oriented reward models, task-specific weighting, and LLM-generated feedback to enable unified cross-task training. To support this research, they introduce ChatRewrite, a conversational rewrite dataset with diverse, naturally phrased instructions. Experiments show that Dr Genre consistently outperforms strong baselines across factual correction, style transfer, and email editing, improving instruction adherence (agreement), internal consistency (coherence), and conciseness.

📝 Abstract
Generic text rewriting is a prevalent large language model (LLM) application that covers diverse real-world tasks, such as style transfer, fact correction, and email editing. These tasks vary in rewriting objectives (e.g., factual consistency vs. semantic preservation), making it challenging to develop a unified model that excels across all dimensions. Existing methods often specialize in either a single task or a specific objective, limiting their generalizability. In this work, we introduce a generic model proficient in factuality, stylistic, and conversational rewriting tasks. To simulate real-world user rewrite requests, we construct a conversational rewrite dataset, ChatRewrite, that presents "natural"-sounding instructions derived from raw emails using LLMs. Combined with other popular rewrite datasets, including LongFact for the factuality rewrite task and RewriteLM for the stylistic rewrite task, this forms a broad benchmark for training and evaluating generic rewrite models. To align with task-specific objectives, we propose Dr Genre, a Decoupled-reward learning framework for Generic rewriting, that utilizes objective-oriented reward models with task-specific weighting. Evaluation shows that our approach delivers higher-quality rewrites across all targeted tasks, improving objectives including instruction following (agreement), internal consistency (coherence), and minimal unnecessary edits (conciseness).
Problem

Research questions and friction points this paper is trying to address.

How to build a single model that excels across diverse text rewriting tasks with conflicting objectives.
Existing methods specialize in one task or one objective, limiting generalizability.
Task-specific quality signals are needed to guide a unified rewriting model.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled-reward learning for generic text rewriting
Utilizes objective-oriented reward models with task-specific weighting
Constructs ChatRewrite dataset for conversational rewrite tasks
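The core idea of the decoupled-reward design—separate reward models per objective, combined with per-task weights—can be sketched in a few lines. The sketch below is purely illustrative: the objective names follow the paper's evaluation axes (agreement, coherence, conciseness), but the weight values and the toy scorers are invented stand-ins, not the paper's learned reward models.

```python
# Hypothetical sketch of decoupled reward aggregation (not the authors' code).
# Each objective has its own reward model; each task has its own weight vector.

OBJECTIVES = ("agreement", "coherence", "conciseness")

# Assumed task-specific weights for illustration; the paper's actual
# weighting scheme and values are not reproduced here.
TASK_WEIGHTS = {
    "factuality":     {"agreement": 0.6, "coherence": 0.3, "conciseness": 0.1},
    "stylistic":      {"agreement": 0.4, "coherence": 0.4, "conciseness": 0.2},
    "conversational": {"agreement": 0.5, "coherence": 0.2, "conciseness": 0.3},
}

def toy_reward_models(source: str, rewrite: str) -> dict:
    """Stand-in scorers; the real ones are learned from LLM feedback."""
    src_words, out_words = set(source.split()), set(rewrite.split())
    return {
        # Fraction of source vocabulary preserved in the rewrite.
        "agreement": len(src_words & out_words) / max(len(src_words), 1),
        # Trivial surface check standing in for a learned coherence score.
        "coherence": 1.0 if rewrite and rewrite[0].isupper() else 0.5,
        # Penalize rewrites that grow much longer than the source.
        "conciseness": min(1.0, len(source.split()) / max(len(rewrite.split()), 1)),
    }

def decoupled_reward(task: str, source: str, rewrite: str) -> float:
    """Weighted sum of per-objective rewards, using the task's weight vector."""
    scores = toy_reward_models(source, rewrite)
    weights = TASK_WEIGHTS[task]
    return sum(weights[obj] * scores[obj] for obj in OBJECTIVES)
```

Decoupling lets each objective's reward model be trained and evaluated independently, while the per-task weights let one policy trade the objectives off differently for factual, stylistic, and conversational rewrites.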