🤖 AI Summary
Existing customized video generation methods struggle to simultaneously preserve subject appearance fidelity and ensure temporal motion consistency, primarily because they lack object-level, subject-motion disentangled modeling. This paper proposes a subject-motion representation disentanglement and alignment framework for text-to-video generation. It introduces the first object-level representation alignment mechanism, designs a sparse spatiotemporal LoRA injection strategy to minimize fine-tuning interference, and develops a collaborative pair of encoders: a self-supervised subject encoder and an optical-flow-based motion encoder. The method integrates self-supervised representation learning, optical-flow-driven motion modeling, efficient LoRA-based fine-tuning, and spatiotemporally sparse adapters. Evaluated on multiple benchmarks, it achieves significant improvements in subject similarity (+12.6%) and motion consistency (+9.8%), enabling fine-grained, disentangled, controllable generation with both high visual fidelity and temporal stability.
📝 Abstract
Customized video generation aims to produce videos that faithfully preserve the subject's appearance from reference images while maintaining temporally consistent motion from reference videos. Existing methods struggle to ensure both subject appearance similarity and motion pattern consistency due to the lack of object-level guidance for subject and motion. To address this, we propose SMRABooth, which leverages a self-supervised encoder and an optical flow encoder to provide object-level subject and motion representations. These representations are aligned with the model during LoRA fine-tuning. Our approach is structured in three core stages: (1) We exploit subject representations from a self-supervised encoder to guide subject alignment, enabling the model to capture the overall structure of the subject and enhance high-level semantic consistency. (2) We utilize motion representations from an optical flow encoder to capture structurally coherent, object-level motion trajectories independent of appearance. (3) We propose a subject-motion association decoupling strategy that applies sparse LoRA injection across both injection locations and training timing, effectively reducing interference between the subject and motion LoRAs. Extensive experiments show that SMRABooth excels in subject and motion customization, maintaining consistent subject appearance and motion patterns, proving its effectiveness in controllable text-to-video generation.
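The two core ideas above — aligning the model's features with frozen object-level representations, and injecting the subject and motion LoRAs sparsely over layers and timesteps — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: `cosine_alignment_loss`, `sparse_lora_mask`, the alternating spatial/temporal block layout, and the timestep split are all hypothetical names and choices for illustration.

```python
import numpy as np

def cosine_alignment_loss(model_feats, encoder_reps):
    """Object-level alignment: 1 minus the mean cosine similarity between
    the generator's intermediate features and the frozen encoder's subject
    (or motion) representations. Both arrays: (num_objects, dim)."""
    m = model_feats / np.linalg.norm(model_feats, axis=-1, keepdims=True)
    e = encoder_reps / np.linalg.norm(encoder_reps, axis=-1, keepdims=True)
    return 1.0 - float(np.mean(np.sum(m * e, axis=-1)))

def sparse_lora_mask(layer_idx, timestep, kind, num_steps=1000):
    """Hypothetical sparse-injection rule: apply the subject LoRA only in
    spatial blocks at low-noise (late) timesteps, and the motion LoRA only
    in temporal blocks at high-noise (early) timesteps, so the two adapters
    rarely touch the same weights at the same denoising stage."""
    spatial = layer_idx % 2 == 0  # assume alternating spatial/temporal blocks
    if kind == "subject":
        return spatial and timestep < num_steps // 2
    if kind == "motion":
        return (not spatial) and timestep >= num_steps // 2
    raise ValueError(f"unknown LoRA kind: {kind}")
```

During fine-tuning, a loss of this form would be added to the diffusion objective, and the mask would gate which LoRA deltas are active at each layer and sampled timestep; the disjoint location/timing supports are what reduce interference between the two adapters.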