VASCAR: Content-Aware Layout Generation via Visual-Aware Self-Correction

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing layout generation methods—including large language models (LLMs)—exhibit limited performance on content-aware tasks (e.g., web or poster layout) due to their lack of native visual perception capability. To address this, we propose the first zero-shot, iterative self-correction framework for layout generation based on Large Vision-Language Models (LVLMs). Our method renders text-generated initial layouts as visualization images with color-coded bounding boxes, feeds them into an LVLM (e.g., Gemini) for multimodal understanding, and elicits textual feedback—establishing a “text → image → text” cross-modal optimization loop. Crucially, it requires no fine-tuning and is the first to directly leverage LVLMs’ intrinsic visual reasoning capabilities within the layout generation pipeline. Experiments demonstrate that our approach comprehensively outperforms existing specialized models and LLM-based baselines under zero-shot settings, achieving new state-of-the-art layout quality across standard metrics.

📝 Abstract
Large language models (LLMs) have proven effective for layout generation due to their ability to produce structure-description languages, such as HTML or JSON, even without access to visual information. Recently, LLM providers have evolved these models into large vision-language models (LVLMs), which show prominent multi-modal understanding capabilities. How, then, can we leverage this multi-modal power for layout generation? To answer this, we propose Visual-Aware Self-Correction LAyout GeneRation (VASCAR) for LVLM-based content-aware layout generation. In our method, LVLMs iteratively refine their outputs with reference to rendered layout images, which are visualized as colored bounding boxes on poster backgrounds. In experiments, we demonstrate the effectiveness of our method combined with Gemini. Without any additional training, VASCAR achieves state-of-the-art (SOTA) layout generation quality, outperforming both existing layout-specific generative models and other LLM-based methods.
Problem

Research questions and friction points this paper is trying to address.

LLMs lack visual perception for content-aware layout generation.
Explore LVLMs for improved content-aware layout generation tasks.
Propose VASCAR for iterative, visual-aware layout refinement.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large vision-language models for layout generation
Implements Visual-Aware Self-Correction (VASCAR) technique
Iteratively refines outputs using rendered layout images
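The iterative loop above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `render_layout` stands in for the real renderer that draws color-coded bounding boxes on the poster background, and `lvlm_refine` stubs out the LVLM call (e.g. to Gemini) with a toy rule so the loop is runnable end to end.

```python
import json

def render_layout(layout, canvas=(1024, 768)):
    """Stand-in for the renderer: describe each element as a
    color-coded bounding box on the poster background."""
    palette = {"title": "red", "text": "green", "logo": "blue"}
    return [{"box": e["box"], "color": palette.get(e["type"], "gray")}
            for e in layout]

def lvlm_refine(layout, rendered, prompt):
    """Stub for the LVLM call: in VASCAR this receives the current
    layout (text) plus its visualization (image) and returns a refined
    layout. Here we just stagger boxes vertically as toy 'feedback'."""
    refined = [dict(e) for e in layout]
    for i, e in enumerate(refined):
        x, y, w, h = e["box"]
        e["box"] = (x, y + 10 * i, w, h)  # nudge later boxes downward
    return refined

def vascar_loop(initial_layout, prompt, max_iters=3):
    """Text -> image -> text self-correction loop."""
    layout = initial_layout
    for _ in range(max_iters):
        rendered = render_layout(layout)                 # text -> image
        layout = lvlm_refine(layout, rendered, prompt)   # image -> text
    return layout

layout = [{"type": "title", "box": (0, 0, 400, 80)},
          {"type": "text", "box": (0, 0, 400, 200)}]
final = vascar_loop(layout, "Place elements without overlap.")
print(json.dumps(final))
```

In the actual method, the refinement step is a single multimodal prompt containing the rendered image and the layout serialization, so no fine-tuning or gradient updates are needed.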