Extrapolating and Decoupling Image-to-Video Generation Models: Motion Modeling is Easier Than You Think

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image-to-video (I2V) generation faces a fundamental trade-off between motion controllability and dynamic richness: existing methods suffer from limited motion magnitude, poor text-video alignment, or low visual fidelity. To address this, we propose an Extrapolating and Decoupling framework, the first to bring model merging techniques to the I2V domain, built around a three-stage paradigm: (1) a lightweight text adapter injected into the temporal module enables fine-grained motion control; (2) a training-free weight extrapolation strategy substantially amplifies motion magnitude; and (3) time-adaptive injection of decoupled, motion-aware parameters jointly optimizes controllability and dynamics. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with significant gains in motion naturalness, text-video alignment, and dynamic richness, enabling high-quality, high-fidelity, and highly controllable I2V generation.
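Stage (1) injects the text condition into the temporal module through a lightweight, learnable adapter. Below is a minimal sketch of how such an adapter could be wired in, assuming a residual cross-attention design; the class name, dimensions, and architecture are illustrative assumptions, not details from the paper.

```python
import torch.nn as nn

class TextTemporalAdapter(nn.Module):
    """Hypothetical lightweight adapter: temporal features attend to
    text embeddings, so the text condition can steer motion directly.
    Illustrative only; the paper's actual adapter may differ.
    """
    def __init__(self, dim, text_dim, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(
            dim, num_heads, kdim=text_dim, vdim=text_dim, batch_first=True
        )

    def forward(self, temporal_feats, text_embeds):
        # temporal_feats: (batch, frames, dim); text_embeds: (batch, tokens, text_dim)
        h = self.norm(temporal_feats)
        out, _ = self.attn(h, text_embeds, text_embeds)
        return temporal_feats + out  # residual keeps the base model's behavior
```

Under a standard adapter-tuning setup, only the adapter parameters would be trained while the base I2V-DM stays frozen; whether the paper also updates other temporal weights is not stated here.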

📝 Abstract
Image-to-Video (I2V) generation aims to synthesize a video clip from a given image and a condition (e.g., text). The key challenge of this task lies in generating natural motion while preserving the original appearance of the image. However, current I2V diffusion models (I2V-DMs) often produce videos with a limited degree of motion, or exhibit uncontrollable motion that conflicts with the textual condition. To address these limitations, we propose a novel Extrapolating and Decoupling framework, which introduces model merging techniques to the I2V domain for the first time. Our framework consists of three separate stages: (1) Starting from a base I2V-DM, we explicitly inject the textual condition into the temporal module via a lightweight, learnable adapter and fine-tune the integrated model to improve motion controllability. (2) We introduce a training-free extrapolation strategy that amplifies the dynamic range of the motion by effectively reversing the fine-tuning process, significantly enhancing the degree of motion. (3) Given the models from the first two stages, which excel in motion controllability and motion degree respectively, we decouple the parameters associated with each motion ability and inject them into the base I2V-DM. Because the I2V-DM handles different levels of motion controllability and dynamics at different denoising time steps, we adjust these motion-aware parameters adaptively over time. Extensive qualitative and quantitative experiments demonstrate the superiority of our framework over existing methods.
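The training-free extrapolation of stage (2) reads like task-arithmetic-style weight merging: moving the weights in the direction opposite to fine-tuning. A minimal sketch under that reading; the coefficient `gamma` and the model handles are hypothetical, and the paper's actual extrapolation rule may differ.

```python
def extrapolate_weights(base_state, finetuned_state, gamma=0.5):
    """Training-free extrapolation (assumed form): push the base weights
    away from the fine-tuned ones, i.e. reverse the fine-tuning direction,
    to amplify motion dynamics.

        theta_extrap = theta_base + gamma * (theta_base - theta_ft)

    Both arguments are PyTorch state dicts with identical keys; integer
    buffers are passed through unchanged.
    """
    return {
        name: w + gamma * (w - finetuned_state[name]) if w.is_floating_point() else w
        for name, w in base_state.items()
    }

# Hypothetical usage with two models of the same architecture:
#   merged = extrapolate_weights(base_dm.state_dict(), tuned_dm.state_dict())
#   base_dm.load_state_dict(merged)
```

The appeal of this form is that it needs no gradients or data: one pass over the state dicts yields a model whose motion dynamics exceed the base model's, at the cost of the controllability the fine-tuning added.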
Problem

Research questions and friction points this paper is trying to address.

Enhance motion controllability in image-to-video generation.
Amplify motion range without additional training.
Decouple and optimize motion parameters for better video synthesis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model merging techniques in the I2V domain (a time-adaptive merging sketch follows this list)
Lightweight adapter for motion controllability
Training-free extrapolation for motion enhancement
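The decoupling stage injects motion-aware parameters from the controllability-tuned and dynamics-extrapolated models into the base I2V-DM, with weights that change across denoising time steps. A minimal sketch, assuming a linear timestep schedule and a name-based filter for the motion-aware subset; the schedule, the blend rule, and all names here are assumptions rather than the paper's actual procedure.

```python
def time_adaptive_merge(base_state, control_state, dynamic_state, t, t_max):
    """Blend motion-aware parameters from the controllability model and
    the dynamics model into the base I2V-DM, weighted by denoising step.

    alpha -> 1 at the noisiest step (favor the dynamics model, which
    shapes coarse motion early); alpha -> 0 near the final steps (favor
    the controllability model). The "temporal" key filter stands in for
    the paper's decoupled, motion-aware parameter subset.
    """
    alpha = t / t_max
    merged = {}
    for name, w in base_state.items():
        if "temporal" not in name or not w.is_floating_point():
            merged[name] = w  # leave appearance-related weights untouched
            continue
        delta_dyn = dynamic_state[name] - w   # dynamics "task vector"
        delta_ctl = control_state[name] - w   # controllability "task vector"
        merged[name] = w + alpha * delta_dyn + (1.0 - alpha) * delta_ctl
    return merged
```

In practice one would precompute merged weights for a few timestep buckets rather than re-merging at every denoising step; whether the paper swaps whole parameter sets or blends them continuously is not specified here.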
Authors
Jie Tian, New Jersey Institute of Technology (Wireless Sensor Network, Ad hoc Sensor Network, Cloud Computing)
Xiaoye Qu, Shanghai AI Lab
Zhenyi Lu, School of Computer Science & Technology, Huazhong University of Science and Technology
Wei Wei, School of Computer Science & Technology, Huazhong University of Science and Technology
Sichen Liu, MS Student, Huazhong University of Science and Technology (Generative Model, Image Generation)
Yu Cheng, The Chinese University of Hong Kong