🤖 AI Summary
Diffusion models still face challenges in structural control: ControlNet requires hand-crafted conditioning maps and full model retraining, limiting flexibility, while inversion-based methods incur high inference overhead from dual-path denoising. This paper proposes a training-free framework built on single-step attention extraction and Latent-Condition Decoupling (LCD), which efficiently derives semantically and spatially aligned structural representations directly from an input image and reuses them throughout denoising. By selecting an optimal key timestep and enabling implicit structural reuse, the method avoids fine-tuning, image inversion, and iterative extraction. With only ~5% additional computational cost, it achieves high-fidelity, structurally consistent generation, supports precise semantic layout control, and enables compositional scene design from multiple reference images. Extensive experiments demonstrate superior performance over baselines including ControlNet.
📝 Abstract
Controlling the spatial and semantic structure of diffusion-generated images remains a challenge. Existing methods like ControlNet rely on handcrafted condition maps and retraining, limiting flexibility and generalization. Inversion-based approaches offer stronger alignment but incur high inference cost due to dual-path denoising. We present FreeControl, a training-free framework for semantic structural control in diffusion models. Unlike prior methods that extract attention across multiple timesteps, FreeControl performs one-step attention extraction at a single, optimally chosen key timestep and reuses the result throughout denoising. This enables efficient structural guidance without inversion or retraining. To further improve quality and stability, we introduce Latent-Condition Decoupling (LCD): a principled separation of the key timestep from the noise level of the latent used in attention extraction. LCD provides finer control over attention quality and eliminates structural artifacts. FreeControl also supports compositional control via reference images assembled from multiple sources, enabling intuitive scene-layout design and stronger prompt alignment. FreeControl introduces a new paradigm for test-time control: structurally and semantically aligned, visually coherent generation directly from raw images, with the flexibility for intuitive compositional design and compatibility with modern diffusion models at approximately 5% additional cost.
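The control loop described above can be sketched at a high level: noise the reference latent, run one extraction pass at the key timestep to cache attention, then reuse that cache at every denoising step. The sketch below is a toy illustration only, not the paper's implementation; `extract_attention`, `add_noise`, and `denoise_step` are hypothetical stand-ins for the real UNet forward pass and sampler update, and the decoupled `key_t` / `noise_t` arguments mimic LCD.

```python
import numpy as np

def extract_attention(latent, timestep, rng):
    """Hypothetical stand-in for one UNet forward pass that returns
    attention maps (here: 4 tokens over 16 spatial positions)."""
    logits = rng.normal(size=(4, 16)) + latent.mean()
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # rows are softmax-normalized

def add_noise(latent, timestep, rng, num_steps=50):
    """Forward-diffuse the clean reference latent to a chosen noise level."""
    alpha = 1.0 - timestep / num_steps
    return np.sqrt(alpha) * latent + np.sqrt(1.0 - alpha) * rng.normal(size=latent.shape)

def denoise_step(latent, t, attn):
    """Toy denoising update nudged toward the cached structural guidance."""
    guidance = attn.mean()  # placeholder for injecting attn into the UNet
    return 0.98 * latent + 0.02 * guidance

def freecontrol_generate(ref_latent, key_t=25, noise_t=15, num_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    # Latent-Condition Decoupling (toy version): the latent is noised to
    # noise_t, decoupled from the key timestep key_t fed to the network.
    noised = add_noise(ref_latent, noise_t, rng)
    attn = extract_attention(noised, key_t, rng)  # single extraction pass
    latent = rng.normal(size=ref_latent.shape)
    for t in reversed(range(num_steps)):
        # Reuse the cached attention at every step instead of re-extracting
        # or running a second (inversion) denoising path.
        latent = denoise_step(latent, t, attn)
    return latent
```

The point of the sketch is the cost structure: one extra forward pass for extraction, then zero-cost reuse during sampling, which is where the roughly 5% overhead figure comes from.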