🤖 AI Summary
This work addresses the challenge of generating multi-frame, action-rich visual narratives in a zero-shot setting, where maintaining action semantic fidelity, subject identity consistency, and cross-frame background continuity simultaneously remains difficult. The authors propose an efficient pipeline that, given only a long textual prompt, a subject reference image, and bounding boxes, produces temporally coherent and identity-stable image sequences on a single RTX 4090 GPU. The method innovatively integrates three techniques: Gaussian-Centered Attention (GCA) to mitigate interference from overlapping bounding boxes, Action-Boosted Singular Value Reweighting (AB-SVR) to enhance action semantics, and Selective Forgetting Cache (SFC) to establish cross-frame semantic associations. Experiments show a 10–15% improvement on the CLIP-T metric, superior DreamSim scores over strong baselines, competitive CLIP-I performance, and faster inference than FluxKontext, achieving both expressive visuals and stable scene progression.
📄 Abstract
Generating multi-frame, action-rich visual narratives without fine-tuning faces a threefold tension: action text faithfulness, subject identity fidelity, and cross-frame background continuity. We propose StoryTailor, a zero-shot pipeline that runs on a single RTX 4090 (24 GB) and produces temporally coherent, identity-preserving image sequences from a long narrative prompt, per-subject references, and grounding boxes. Three synergistic modules drive the system: Gaussian-Centered Attention (GCA) to dynamically focus on each subject's core and ease grounding-box overlaps; Action-Boost Singular Value Reweighting (AB-SVR) to amplify action-related directions in the text embedding space; and Selective Forgetting Cache (SFC) to retain transferable background cues, forget nonessential history, and selectively surface the retained cues, building cross-scene semantic ties. Compared with baseline methods, experiments show that CLIP-T improves by up to 10–15%, with DreamSim scores lower (better) than strong baselines, while CLIP-I stays in a visually acceptable, competitive range. With matched resolution and steps on a 24 GB GPU, inference is faster than FluxKontext. Qualitatively, StoryTailor delivers expressive interactions and evolving yet stable scenes.
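To make the AB-SVR idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): it takes an SVD of the prompt's token-embedding matrix, scores each singular direction by how strongly action-related tokens load on it, and amplifies those directions before reconstructing the embeddings. The function name, `boost` parameter, and scoring rule are all assumptions for illustration.

```python
import numpy as np

def ab_svr_sketch(token_embeddings, action_mask, boost=1.5):
    """Hypothetical sketch of action-boosted singular value reweighting.

    token_embeddings: (T, D) array of prompt token embeddings.
    action_mask: (T,) boolean array marking action-related tokens
                 (e.g. verbs identified by a POS tagger).
    boost: maximum amplification factor (assumed hyperparameter).
    """
    U, S, Vt = np.linalg.svd(token_embeddings, full_matrices=False)
    # Score each singular direction by the energy action tokens place on it.
    action_energy = (U[action_mask] ** 2).sum(axis=0)          # (K,)
    # Map energies to weights in [1, boost]; directions carrying more
    # action content get amplified more.
    weights = 1.0 + (boost - 1.0) * action_energy / (action_energy.max() + 1e-8)
    # Reweight singular values and reconstruct the embedding matrix.
    return (U * (S * weights)) @ Vt
```

Because every weight is at least 1, the reconstruction never shrinks the embedding matrix; it selectively scales the subspace where action tokens concentrate, which matches the abstract's description of amplifying action-related directions.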