🤖 AI Summary
Existing VAE-based latent diffusion models suffer from inefficient training, slow inference, and poor cross-task transferability—rooted in the lack of semantic disentanglement and discriminative structure in VAE latent spaces. This paper proposes SVG, the first framework to construct a semantically structured latent space *without* relying on VAEs: it freezes a DINO backbone to extract discriminative semantic features and couples it with a lightweight residual branch that models fine-grained details; diffusion models are then trained end-to-end in this structured space. SVG significantly improves training efficiency and sampling speed, enabling high-fidelity, semantically consistent generation in a few denoising steps. Experiments demonstrate that SVG outperforms VAE-based baselines in generation quality, inference speed, and downstream task transferability. By decoupling semantic structure learning from probabilistic latent modeling, SVG establishes a new paradigm for general-purpose, high-quality visual representation learning.
📝 Abstract
Recent progress in diffusion-based visual generation has largely relied on latent diffusion models with variational autoencoders (VAEs). While effective for high-fidelity synthesis, this VAE+diffusion paradigm suffers from limited training efficiency, slow inference, and poor transferability to broader vision tasks. These issues stem from a key limitation of VAE latent spaces: the lack of clear semantic separation and strong discriminative structure. Our analysis confirms that these properties are crucial not only for perception and understanding tasks, but also for the stable and efficient training of latent diffusion models. Motivated by this insight, we introduce SVG, a novel latent diffusion model without variational autoencoders, which leverages self-supervised representations for visual generation. SVG constructs a feature space with clear semantic discriminability by building on frozen DINO features, while a lightweight residual branch captures fine-grained details for high-fidelity reconstruction. Diffusion models are trained directly on this semantically structured latent space to facilitate more efficient learning. As a result, SVG enables accelerated diffusion training, supports few-step sampling, and improves generative quality. Experimental results further show that SVG preserves the semantic and discriminative capabilities of the underlying self-supervised representations, providing a principled pathway toward task-general, high-quality visual representations.
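The core mechanism described above — a frozen semantic encoder combined with a lightweight residual branch to form the latent that the diffusion process operates on — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear projections standing in for DINO and the residual branch, and all shapes and names, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input features, frozen semantic features,
# and the small residual detail features.
D_IN, D_SEM, D_RES = 64, 16, 8

# A frozen linear map stands in for the DINO backbone (never updated),
# and a trainable low-capacity map stands in for the residual branch.
W_sem = rng.standard_normal((D_IN, D_SEM))         # frozen semantic encoder
W_res = rng.standard_normal((D_IN, D_RES)) * 0.01  # lightweight residual branch

def encode(x):
    """Build the structured latent: frozen semantic features
    concatenated with residual fine-grained detail features."""
    f_sem = x @ W_sem   # discriminative, semantically structured part
    f_res = x @ W_res   # residual details for reconstruction fidelity
    return np.concatenate([f_sem, f_res], axis=-1)

def noising_step(z, alpha_bar_t, eps):
    """Standard DDPM forward process applied in the latent space:
    z_t = sqrt(alpha_bar_t) * z + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bar_t) * z + np.sqrt(1.0 - alpha_bar_t) * eps

x = rng.standard_normal((4, D_IN))   # a batch of flattened "images"
z = encode(x)                        # structured latent, shape (4, D_SEM + D_RES)
eps = rng.standard_normal(z.shape)
z_t = noising_step(z, alpha_bar_t=0.5, eps=eps)
print(z.shape, z_t.shape)
```

The point of the sketch is the separation of roles: the diffusion model only ever sees `z`, whose first block carries the frozen discriminative structure, which is what the paper credits for faster training and few-step sampling.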