Synthetic Video Enhances Physical Fidelity in Video Synthesis

📅 2025-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the low physical fidelity of generative video models—manifested as artifacts such as jittering and interpenetration—by proposing a physics-aware enhancement method grounded in synthetic video. Methodologically, it employs a differentiable rendering pipeline to generate physically consistent synthetic videos, establishes a physics-perceptive data filtering mechanism, and introduces cross-domain feature alignment coupled with adversarial physical consistency regularization—enabling physics realism transfer without differentiable simulation or explicit physical modeling. This work provides the first empirical evidence that synthetic video can substantially improve physical fidelity in video generation. Evaluated on three physics-sensitive tasks—rigid-body collisions, fluid motion, and pendulum dynamics—the approach reduces physical violation rates significantly, achieving an average 37.2% improvement in physical plausibility, validated jointly by user studies and automated physical violation detection.

Technology Category

Application Category

📝 Abstract
We investigate how to enhance the physical fidelity of video generation models by leveraging synthetic videos derived from computer graphics pipelines. These rendered videos respect real-world physics, such as maintaining 3D consistency, and serve as a valuable resource that can potentially improve video generation models. To harness this potential, we propose a solution that curates and integrates synthetic data while introducing a method to transfer its physical realism to the model, significantly reducing unwanted artifacts. Through experiments on three representative tasks emphasizing physical consistency, we demonstrate its efficacy in enhancing physical fidelity. While our model still lacks a deep understanding of physics, our work offers one of the first empirical demonstrations that synthetic video enhances physical fidelity in video synthesis. Website: https://kevinz8866.github.io/simulation/
Problem

Research questions and friction points this paper is trying to address.

Enhancing physical fidelity in video generation models
Leveraging synthetic videos for 3D consistency
Reducing artifacts by transferring physical realism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage synthetic videos from computer graphics
Integrate synthetic data to reduce artifacts
Transfer physical realism to video models
🔎 Similar Papers
No similar papers found.
Q
Qi Zhao
ByteDance Seed
Xingyu Ni
Xingyu Ni
Peking University
Computer Graphics
Z
Ziyu Wang
ShanghaiTech University, ByteDance Seed
F
Feng Cheng
ByteDance Seed
Ziyan Yang
Ziyan Yang
Bytedance Seed
Computer VisionNatural Language Processing
Lu Jiang
Lu Jiang
Research Scientist @ Apple
Generative AIFoundation ModelRobust Deep LearningMultimediaVideo Generation
B
Bohan Wang
National University of Singapore