MaskAttn-SDXL: Controllable Region-Level Text-To-Image Generation

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Text-to-image diffusion models frequently exhibit compositional failures—such as entity entanglement, attribute misalignment, and spatial localization errors—when prompted with multi-object, multi-attribute, or complex spatial-relational descriptions. To address this, we propose a lightweight region-level control mechanism: at the cross-attention logits layer of Stable Diffusion XL, we introduce learnable binary masks that sparsify attention connections between text tokens and latent features—without requiring auxiliary tokens, explicit positional encodings, or external segmentation masks. Our method preserves generation quality and diversity while substantially improving spatial layout accuracy and attribute-entity binding fidelity in multi-object scenes. Evaluated on multiple compositional generalization benchmarks—including Multi-Concept, Spatial-REL, and CLEVR-Ref+, it achieves state-of-the-art performance.

Technology Category

Application Category

📝 Abstract
Text-to-image diffusion models achieve impressive realism but often suffer from compositional failures on prompts with multiple objects, attributes, and spatial relations, resulting in cross-token interference where entities entangle, attributes mix across objects, and spatial cues are violated. To address these failures, we propose MaskAttn-SDXL,a region-level gating mechanism applied to the cross-attention logits of Stable Diffusion XL(SDXL)'s UNet. MaskAttn-SDXL learns a binary mask per layer, injecting it into each cross-attention logit map before softmax to sparsify token-to-latent interactions so that only semantically relevant connections remain active. The method requires no positional encodings, auxiliary tokens, or external region masks, and preserves the original inference path with negligible overhead. In practice, our model improves spatial compliance and attribute binding in multi-object prompts while preserving overall image quality and diversity. These findings demonstrate that logit-level maksed cross-attention is an data-efficient primitve for enforcing compositional control, and our method thus serves as a practical extension for spatial control in text-to-image generation.
Problem

Research questions and friction points this paper is trying to address.

Addresses cross-token interference in multi-object text-to-image generation
Solves attribute mixing and spatial relation violations in diffusion models
Improves compositional control without external masks or positional encodings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Region-level gating mechanism
Sparsify token-latent interactions
Masked cross-attention logits
🔎 Similar Papers
No similar papers found.