LSSGen: Leveraging Latent Space Scaling in Flow and Diffusion for Efficient Text-to-Image Generation

📅 2025-07-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Traditional text-to-image generation methods perform resolution scaling in pixel space via downsampling/upsampling, which often introduces artifacts, and re-encoding the upscaled images into the latent space further degrades image quality. To address these issues, we propose the first framework that performs resolution scaling entirely within the latent space, without modifying the U-Net or Transformer backbone architecture. Our method integrates flow matching and diffusion modeling and introduces a lightweight latent upsampler that enables flexible multi-resolution generation. Extensive experiments demonstrate that, at comparable sampling speeds, our approach achieves up to a 246% improvement in TOPIQ score for 1024² images, significantly enhancing text-image alignment and perceptual quality while effectively mitigating image degradation.

๐Ÿ“ Abstract
Flow matching and diffusion models have shown impressive results in text-to-image generation, producing photorealistic images through an iterative denoising process. A common strategy to speed up synthesis is to perform early denoising at lower resolutions. However, traditional methods that downscale and upscale in pixel space often introduce artifacts and distortions. These issues arise when the upscaled images are re-encoded into the latent space, leading to degraded final image quality. To address this, we propose **Latent Space Scaling Generation (LSSGen)**, a framework that performs resolution scaling directly in the latent space using a lightweight latent upsampler. Without altering the Transformer or U-Net architecture, LSSGen improves both efficiency and visual quality while supporting flexible multi-resolution generation. Our comprehensive evaluation covering text-image alignment and perceptual quality shows that LSSGen significantly outperforms conventional scaling approaches. When generating $1024^2$ images at similar speeds, it achieves up to 246% TOPIQ score improvement.
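The pipeline described in the abstract can be illustrated with a minimal sketch: run the early denoising steps on a small latent, upsample directly in latent space, then finish denoising at the target resolution. The `denoise` and `latent_upsample` functions below are hypothetical stand-ins (a toy update rule and nearest-neighbor upsampling), not the paper's actual backbone or learned upsampler; only the control flow reflects the described method.

```python
import numpy as np

def denoise(latent, steps, rng):
    # Placeholder for the flow/diffusion backbone (U-Net or Transformer),
    # which LSSGen uses unmodified. A toy contraction-plus-noise update here.
    for _ in range(steps):
        latent = 0.9 * latent + 0.01 * rng.standard_normal(latent.shape)
    return latent

def latent_upsample(latent, scale=2):
    # Toy stand-in for the lightweight latent upsampler: nearest-neighbor
    # upsampling applied to the latent itself, never to decoded pixels.
    return latent.repeat(scale, axis=-2).repeat(scale, axis=-1)

def lssgen_sketch(channels=4, low_res=32, scale=2,
                  low_steps=20, high_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    # 1) early denoising at low resolution (cheap steps)
    latent = rng.standard_normal((channels, low_res, low_res))
    latent = denoise(latent, low_steps, rng)
    # 2) scale resolution in latent space -- no pixel-space decode/re-encode,
    #    which is the step that introduces artifacts in conventional scaling
    latent = latent_upsample(latent, scale)
    # 3) remaining denoising steps at the target resolution
    return denoise(latent, high_steps, rng)

out = lssgen_sketch()
print(out.shape)  # (4, 64, 64)
```

The key point the sketch captures is that the backbone is called unchanged at both resolutions; only the latent's spatial size changes between the two denoising phases.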
Problem

Research questions and friction points this paper is trying to address.

Improves efficiency in text-to-image generation
Reduces artifacts from pixel space scaling
Enhances multi-resolution image quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space scaling for efficient generation
Lightweight latent upsampler improves quality
Multi-resolution support without architecture changes