LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Large Vision-Language Models (LVLMs) struggle to generate coherent, high-fidelity multimodal content exceeding 1,000 words, largely because long-output examples are scarce during supervised fine-tuning (SFT). To address this, the paper proposes: (1) LongWriter-V-22k, the first SFT dataset for ultra-long multimodal generation; (2) Iterative Direct Preference Optimization (IterDPO), a training strategy that drastically reduces the human-feedback cost for lengthy outputs; and (3) MMLongBench-Write, the first benchmark dedicated to evaluating long-form multimodal generation. Trained with LongWriter-V-22k and IterDPO, the 7B-parameter model surpasses larger proprietary models, including GPT-4o, on MMLongBench-Write, achieving, for the first time, high-fidelity, image-consistent text generation at the ten-thousand-word scale.

📝 Abstract
Existing Large Vision-Language Models (LVLMs) can process inputs with context lengths of up to 128k visual and text tokens, yet they struggle to generate coherent outputs beyond 1,000 words. We find that the primary limitation is the absence of long output examples during supervised fine-tuning (SFT). To tackle this issue, we introduce LongWriter-V-22k, an SFT dataset comprising 22,158 examples, each with multiple input images, an instruction, and corresponding outputs ranging from 0 to 10,000 words. Moreover, to achieve long outputs that maintain high fidelity to the input images, we apply Direct Preference Optimization (DPO) to the SFT model. Given the high cost of collecting human feedback for lengthy outputs (e.g., 3,000 words), we propose IterDPO, which breaks long outputs into segments and uses iterative corrections to form preference pairs with the original outputs. Additionally, we develop MMLongBench-Write, a benchmark featuring six tasks to evaluate the long-generation capabilities of VLMs. Our 7B parameter model, trained with LongWriter-V-22k and IterDPO, achieves impressive performance on this benchmark, outperforming larger proprietary models like GPT-4o. Code and data: https://github.com/THU-KEG/LongWriter-V
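The IterDPO idea described above — segmenting a long output and pairing each corrected segment against the original to form DPO preference data — can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the segment length, the `correct_fn` callback, and the `prompt`/`chosen`/`rejected` record layout are all assumptions for exposition.

```python
# Hypothetical sketch of IterDPO-style preference-pair construction.
# split_into_segments, build_preference_pairs, and correct_fn are
# illustrative stand-ins, not the paper's actual code.

def split_into_segments(text: str, seg_len: int = 500) -> list[str]:
    """Split a long output into fixed-size word segments."""
    words = text.split()
    return [" ".join(words[i:i + seg_len])
            for i in range(0, len(words), seg_len)]

def build_preference_pairs(long_output: str, correct_fn, seg_len: int = 500):
    """For each segment, pair the corrected version (chosen) against the
    original segment (rejected), conditioned on the preceding context."""
    pairs = []
    context = ""
    for seg in split_into_segments(long_output, seg_len):
        corrected = correct_fn(seg)  # e.g. a human or model edit of this segment
        pairs.append({"prompt": context, "chosen": corrected, "rejected": seg})
        context += corrected + " "  # iterate forward with the corrected text
    return pairs

# Toy usage with an identity "corrector": 1,200 words -> 3 segments of <= 500
pairs = build_preference_pairs("word " * 1200, lambda s: s.strip(), seg_len=500)
```

Correcting segment by segment means an annotator only ever reviews a few hundred words at a time, which is the cost reduction the abstract claims relative to judging whole 3,000-word outputs.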
Problem

Research questions and friction points this paper is trying to address.

Generating coherent outputs beyond 1,000 words with vision-language models.
Maintaining fidelity to the input images across long outputs.
Lack of benchmarks for evaluating long-form generation capabilities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LongWriter-V-22k, a supervised fine-tuning (SFT) dataset for ultra-long multimodal generation
Iterative Direct Preference Optimization (IterDPO) with segment-level corrections
MMLongBench-Write, a six-task benchmark for long-form generation