🤖 AI Summary
Existing text-to-image diffusion models often suffer from poor semantic alignment under complex prompts due to insufficient interaction between textual and visual features. To address this, the authors propose Diff-Aid, a lightweight, plug-and-play inference-time adaptive modulation method that, for the first time, enables dynamic token-level control of text–image interactions across transformer blocks and denoising timesteps. Requiring no additional training, Diff-Aid significantly enhances semantic consistency, visual quality, and human preference in generated images. It demonstrates strong compatibility with powerful baselines such as SD 3.5 and FLUX, effectively supporting diverse downstream applications including style LoRAs, controllable generation, and zero-shot editing.
📝 Abstract
Recent text-to-image (T2I) diffusion models have achieved remarkable advances, yet faithfully following complex textual descriptions remains challenging due to insufficient interactions between textual and visual features. Prior approaches enhance such interactions via architectural design or handcrafted textual condition weighting, but they lack flexibility and overlook the dynamic interactions across different blocks and denoising stages. To provide a more flexible and efficient solution, we propose Diff-Aid, a lightweight inference-time method that adaptively adjusts per-token text–image interactions across transformer blocks and denoising timesteps. Beyond improving generation quality, Diff-Aid yields interpretable modulation patterns that reveal how different blocks, timesteps, and textual tokens contribute to semantic alignment during denoising. As a plug-and-play module, Diff-Aid can be seamlessly integrated into downstream applications for further improvement, including style LoRAs, controllable generation, and zero-shot editing. Experiments on strong baselines (SD 3.5 and FLUX) demonstrate consistent improvements in prompt adherence, visual quality, and human preference across various metrics. Our code and models will be released.
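To make the core idea concrete, the sketch below illustrates what "per-token modulation of text–image interactions" could look like in a single cross-attention call. This is a hypothetical simplification for intuition only: the function name `modulated_cross_attention`, the `token_weights` parameter, and the plain logit-scaling scheme are our assumptions, not the paper's actual mechanism, and Diff-Aid's adaptive schedule over blocks and timesteps is not reproduced here.

```python
import numpy as np

def modulated_cross_attention(q, k, v, token_weights):
    """Cross-attention from image queries to text keys/values, where each
    text token's attention logits are scaled by a per-token weight.

    q: (n_img, d) image-token queries
    k, v: (n_txt, d) text-token keys and values
    token_weights: (n_txt,) per-token modulation factors (hypothetical;
        in Diff-Aid these would vary per block and denoising timestep)
    """
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d)        # (n_img, n_txt) attention logits
    logits = logits * token_weights        # token-level modulation
    # Numerically stable softmax over the text-token axis
    logits -= logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                        # (n_img, d) modulated output

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(3, 8))
v = rng.normal(size=(3, 8))
out = modulated_cross_attention(q, k, v, np.array([1.5, 1.0, 0.5]))
```

With all weights set to 1 this reduces to standard scaled-dot-product cross-attention; raising a token's weight sharpens the image tokens' attention toward (or away from) that text token, which is the kind of control the method applies adaptively at inference time.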