SuperWriter: Reflection-Driven Long-Form Generation with Large Language Models

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak coherence and logical inconsistency in long-text generation by large language models (LLMs), this paper proposes an agent framework endowed with explicit planning and multi-stage reflective capabilities. Methodologically: (1) it introduces a hierarchical direct preference optimization (DPO) mechanism integrated with Monte Carlo tree search (MCTS), where terminal-quality signals are backpropagated via MCTS to refine intermediate decisions; (2) it incorporates a structured, professional-writer-inspired reasoning guidance paradigm to enable dynamic calibration during generation. Evaluated on multiple long-text benchmarks using a 7B-parameter model, the approach achieves state-of-the-art (SOTA) performance—outperforming larger baseline models in both automated and human evaluations. The primary contributions are the first hierarchical DPO-MCTS co-optimization mechanism and a reflective, interpretable, and human-intervention-enabled generation architecture.
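The hierarchical DPO-MCTS mechanism described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: terminal quality scores (e.g. from a judge model) are backpropagated through a search tree so every intermediate writing decision receives a value, and sibling decisions are then ranked into (chosen, rejected) DPO preference pairs. All names and data structures here are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    text: str                       # the partial draft / decision at this step
    children: list["Node"] = field(default_factory=list)
    score: Optional[float] = None   # terminal quality, set on leaves by a judge

def backpropagate(node: Node) -> float:
    """Assign each internal node the mean quality of its subtree."""
    if not node.children:           # leaf: quality comes from a judge model
        assert node.score is not None
        return node.score
    node.score = sum(backpropagate(c) for c in node.children) / len(node.children)
    return node.score

def preference_pairs(node: Node) -> list[tuple[str, str]]:
    """Rank siblings by backpropagated value into DPO (chosen, rejected) pairs."""
    pairs = []
    if len(node.children) >= 2:
        ranked = sorted(node.children, key=lambda c: c.score, reverse=True)
        pairs.append((ranked[0].text, ranked[-1].text))
    for child in node.children:
        pairs.extend(preference_pairs(child))
    return pairs

# Toy tree: two alternative plans, each expanded into drafts judged at the leaves.
root = Node("prompt", [
    Node("plan A", [Node("draft A1", score=0.9), Node("draft A2", score=0.5)]),
    Node("plan B", [Node("draft B1", score=0.4)]),
])
backpropagate(root)
pairs = preference_pairs(root)
```

The key point the sketch captures is that intermediate decisions (the plans) are optimized using signals observed only at the end of generation, which is what lets DPO act at every stage rather than only on final outputs.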

📝 Abstract
Long-form text generation remains a significant challenge for large language models (LLMs), particularly in maintaining coherence, ensuring logical consistency, and preserving text quality as sequence length increases. To address these limitations, we propose SuperWriter-Agent, an agent-based framework designed to enhance the quality and consistency of long-form text generation. SuperWriter-Agent introduces explicit structured thinking through planning and refinement stages into the generation pipeline, guiding the model to follow a more deliberate and cognitively grounded process akin to that of a professional writer. Based on this framework, we construct a supervised fine-tuning dataset to train a 7B SuperWriter-LM. We further develop a hierarchical Direct Preference Optimization (DPO) procedure that uses Monte Carlo Tree Search (MCTS) to propagate final quality assessments and optimize each generation step accordingly. Empirical results across diverse benchmarks demonstrate that SuperWriter-LM achieves state-of-the-art performance, surpassing even larger-scale baseline models in both automatic evaluation and human evaluation. Furthermore, comprehensive ablation studies demonstrate the effectiveness of hierarchical DPO and underscore the value of incorporating structured thinking steps to improve the quality of long-form text generation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing coherence in long-form text generation by LLMs
Improving logical consistency as sequence length increases
Preserving text quality through structured thinking steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-based framework for long-form text generation
Hierarchical DPO with Monte Carlo Tree Search
Structured thinking steps in generation pipeline
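The structured generation pipeline listed above (planning, drafting, and reflective refinement) can be sketched as a simple agent loop. This is a minimal illustration of the control flow only; `generate` is a stub standing in for an LLM call, and the prompts and stage names are assumptions, not the paper's implementation.

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call, so the control flow below is runnable."""
    return f"<output for: {prompt}>"

def superwriter(task: str, max_refinements: int = 2) -> str:
    # Stage 1: explicit planning before any text is written.
    plan = generate(f"Plan the structure for: {task}")
    # Stage 2: drafting that follows the plan.
    draft = generate(f"Write following plan {plan}: {task}")
    # Stage 3: reflective refinement, critiquing and revising the draft.
    for _ in range(max_refinements):
        critique = generate(f"Critique for coherence: {draft}")
        draft = generate(f"Revise using critique {critique}: {draft}")
    return draft

result = superwriter("an essay on long-form generation")
```

Because each stage is an explicit, inspectable step rather than a single end-to-end generation, this structure is what makes the pipeline interpretable and open to human intervention, as the summary notes.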