🤖 AI Summary
Text-to-image diffusion models frequently exhibit compositional failures, such as entity entanglement, attribute misalignment, and spatial localization errors, when prompted with multi-object, multi-attribute, or complex spatial-relational descriptions. To address this, we propose a lightweight region-level control mechanism: at the cross-attention logits of Stable Diffusion XL, we introduce learnable binary masks that sparsify attention connections between text tokens and latent features, without requiring auxiliary tokens, explicit positional encodings, or external segmentation masks. Our method preserves generation quality and diversity while substantially improving spatial layout accuracy and attribute-entity binding fidelity in multi-object scenes. Evaluated on multiple compositional generalization benchmarks, including Multi-Concept, Spatial-REL, and CLEVR-Ref+, it achieves state-of-the-art performance.
📝 Abstract
Text-to-image diffusion models achieve impressive realism but often suffer from compositional failures on prompts with multiple objects, attributes, and spatial relations, resulting in cross-token interference where entities entangle, attributes mix across objects, and spatial cues are violated. To address these failures, we propose MaskAttn-SDXL, a region-level gating mechanism applied to the cross-attention logits of the Stable Diffusion XL (SDXL) UNet. MaskAttn-SDXL learns a binary mask per layer, injecting it into each cross-attention logit map before the softmax to sparsify token-to-latent interactions so that only semantically relevant connections remain active. The method requires no positional encodings, auxiliary tokens, or external region masks, and preserves the original inference path with negligible overhead. In practice, our model improves spatial compliance and attribute binding in multi-object prompts while preserving overall image quality and diversity. These findings demonstrate that logit-level masked cross-attention is a data-efficient primitive for enforcing compositional control, and our method thus serves as a practical extension for spatial control in text-to-image generation.
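To make the mechanism concrete, the core idea (adding a learnable mask to the cross-attention logits before softmax so that blocked token-to-latent connections receive no attention) can be sketched as below. This is a minimal PyTorch illustration, not the paper's implementation: the module name, shapes, and the sigmoid relaxation used for training are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedCrossAttention(nn.Module):
    """Minimal sketch of logit-level masked cross-attention.

    A learnable per-layer mask logit over (latent, token) connections is
    added to the query-key logits before softmax. During training a soft
    (differentiable) relaxation is used; at inference the mask can be
    binarized so that blocked connections get -inf and thus zero attention.
    All names and shapes here are illustrative assumptions.
    """

    def __init__(self, dim: int, num_tokens: int, num_latents: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Learnable mask logits: one scalar per (latent, token) connection.
        # Initialized to 0 so every connection starts active.
        self.mask_logit = nn.Parameter(torch.zeros(num_latents, num_tokens))

    def forward(self, latents, text, hard: bool = False):
        q = self.to_q(latents)                         # (B, N_lat, D)
        k = self.to_k(text)                            # (B, N_tok, D)
        v = self.to_v(text)                            # (B, N_tok, D)
        logits = q @ k.transpose(-2, -1) * self.scale  # (B, N_lat, N_tok)
        if hard:
            # Binary mask: blocked connections are set to -inf pre-softmax,
            # so they receive exactly zero attention mass.
            mask = (self.mask_logit >= 0).float()
            logits = logits.masked_fill(mask == 0, float("-inf"))
        else:
            # Soft relaxation: log-sigmoid of the mask logit acts as an
            # additive penalty, keeping the mask differentiable.
            logits = logits + F.logsigmoid(self.mask_logit)
        attn = logits.softmax(dim=-1)                  # sparsified attention
        return attn @ v                                # (B, N_lat, D)
```

The key design point the sketch captures is that masking happens at the logit level (before softmax) rather than on the attention weights, so the remaining active connections automatically renormalize to sum to one.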