Diff-Aid: Inference-time Adaptive Interaction Denoising for Rectified Text-to-Image Generation

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image diffusion models often suffer from poor semantic alignment under complex prompts due to insufficient interaction between textual and visual features. To address this, the paper proposes Diff-Aid, a lightweight, plug-and-play inference-time adaptive modulation method that, for the first time, enables dynamic token-level control of text-image interactions across transformer blocks and denoising timesteps. Requiring no additional training, Diff-Aid significantly enhances semantic consistency, visual quality, and human preference in generated images. It is compatible with strong baselines such as SD 3.5 and FLUX and supports diverse downstream applications, including style LoRAs, controllable generation, and zero-shot editing.

📝 Abstract
Recent text-to-image (T2I) diffusion models have achieved remarkable advancement, yet faithfully following complex textual descriptions remains challenging due to insufficient interactions between textual and visual features. Prior approaches enhance such interactions via architectural design or handcrafted textual condition weighting, but lack flexibility and overlook the dynamic interactions across different blocks and denoising stages. To provide a more flexible and efficient solution to this problem, we propose Diff-Aid, a lightweight inference-time method that adaptively adjusts per-token text and image interactions across transformer blocks and denoising timesteps. Beyond improving generation quality, Diff-Aid yields interpretable modulation patterns that reveal how different blocks, timesteps, and textual tokens contribute to semantic alignment during denoising. As a plug-and-play module, Diff-Aid can be seamlessly integrated into downstream applications for further improvement, including style LoRAs, controllable generation, and zero-shot editing. Experiments on strong baselines (SD 3.5 and FLUX) demonstrate consistent improvements in prompt adherence, visual quality, and human preference across various metrics. Our code and models will be released.
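The abstract describes adaptively scaling each prompt token's influence on the image features inside cross-attention, with the scaling varying by transformer block and denoising timestep. The paper's exact formulation is not given here, so the sketch below is only a minimal NumPy illustration of the general mechanism: a hypothetical per-token gain folded into the attention logits, so that a token's softmax weight is multiplied by its gain before renormalization. The function name, the log-gain trick, and the idea of passing a different gain vector per block/timestep are all assumptions for illustration.

```python
import numpy as np

def modulated_cross_attention(q, k, v, token_gain):
    """Cross-attention from image queries to text keys/values with a
    per-token gain on the text tokens (illustrative, not the paper's
    exact method).

    q: (n_image, d) image-token queries
    k, v: (n_text, d) text-token keys/values
    token_gain: (n_text,) positive gains; 1.0 = unmodulated. A caller
    could supply a different gain vector at each block and timestep.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)              # (n_image, n_text)
    # Adding log(gain) to the logits multiplies each text token's
    # softmax weight by its gain before renormalization.
    logits = logits + np.log(token_gain)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

With `token_gain` all ones this reduces to standard scaled dot-product attention; raising one token's gain above 1 strictly increases its normalized attention mass, which is the kind of token-level steering the summary attributes to Diff-Aid.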
Problem

Research questions and friction points this paper is trying to address.

text-to-image generation
diffusion models
semantic alignment
inference-time adaptation
cross-modal interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive interaction denoising
text-to-image generation
inference-time modulation
semantic alignment
plug-and-play module
Binglei Li
Fudan University; Shanghai Innovation Institute; Shanghai Academy of AI for Science
Mengping Yang
East China University of Science and Technology
Few-shot Learning, Generative Models
Zhiyu Tan
Fudan University; Shanghai Academy of AI for Science
Junping Zhang
Fudan University
Hao Li
Fudan University; DAMO, Alibaba
Computer Vision, Deep Learning, AI4S