🤖 AI Summary
Current video generation models face bottlenecks in evaluation granularity and reward modeling, particularly regarding instruction alignment, factual consistency, safety, and fairness. To address this, we introduce MJ-BENCH-VIDEO, the first fine-grained video preference benchmark spanning five aspects (Alignment, Safety, Fineness, Coherence & Consistency, and Bias & Fairness) with 28 fine-grained criteria. We further propose MJ-VIDEO, a Mixture-of-Experts (MoE)-based video reward model that combines text-video joint representation learning, input-aware expert routing, and adaptive preference scoring. Unlike conventional single-router or globally weighted architectures, MJ-VIDEO dynamically activates experts conditioned on input semantics. On MJ-BENCH-VIDEO, MJ-VIDEO improves overall preference judgment by 17.58% and fine-grained, metric-level assessment by 15.87% over strong baselines, while markedly improving semantic alignment between generated videos and textual instructions.
📝 Abstract
Recent advancements in video generation have significantly improved the ability to synthesize videos from text instructions. However, existing models still struggle with key challenges such as instruction misalignment, content hallucination, safety concerns, and bias. To address these limitations, we introduce MJ-BENCH-VIDEO, a large-scale video preference benchmark designed to evaluate video generation across five critical aspects: Alignment, Safety, Fineness, Coherence & Consistency, and Bias & Fairness. The benchmark incorporates 28 fine-grained criteria to provide a comprehensive evaluation of video preference. Building on this dataset, we propose MJ-VIDEO, a Mixture-of-Experts (MoE)-based video reward model designed to deliver fine-grained rewards. MJ-VIDEO dynamically selects the relevant experts to judge preference accurately for each input text-video pair, enabling more precise and adaptable preference judgments. Through extensive benchmarking on MJ-BENCH-VIDEO, we analyze the limitations of existing video reward models and demonstrate the superior performance of MJ-VIDEO in video preference assessment, achieving 17.58% and 15.87% improvements in overall and fine-grained preference judgments, respectively. Additionally, using MJ-VIDEO for preference tuning in video generation improves alignment performance.
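The core mechanism described above, a router that conditions expert activation on the input and aggregates per-aspect expert scores into one reward, can be sketched as follows. This is a minimal illustration under assumed shapes and randomly initialized weights, not the paper's implementation; the class `MoERewardSketch`, the aspect names, and all dimensions are hypothetical stand-ins.

```python
import numpy as np

# The five evaluation aspects from MJ-BENCH-VIDEO; here each aspect
# gets one (hypothetical) expert head.
ASPECTS = ["alignment", "safety", "fineness", "coherence_consistency", "bias_fairness"]


class MoERewardSketch:
    """Toy MoE-style reward model: a router produces input-aware gates
    over per-aspect experts, and the reward is the gate-weighted sum of
    expert scores. Weights are random; a real model would be trained."""

    def __init__(self, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        n = len(ASPECTS)
        self.router_w = rng.normal(scale=0.1, size=(dim, n))  # routing weights
        self.expert_w = rng.normal(scale=0.1, size=(n, dim))  # one linear head per expert

    def score(self, joint_emb: np.ndarray):
        # Input-aware routing: softmax over expert logits.
        logits = joint_emb @ self.router_w
        gates = np.exp(logits - logits.max())
        gates /= gates.sum()
        # Each expert emits a scalar score for its aspect.
        per_aspect = self.expert_w @ joint_emb
        # Adaptive aggregation: gate-weighted overall reward.
        overall = float(gates @ per_aspect)
        return overall, dict(zip(ASPECTS, per_aspect.tolist()))


model = MoERewardSketch(dim=16)
joint_emb = np.random.default_rng(1).normal(size=16)  # stand-in for a text-video joint embedding
overall, per_aspect = model.score(joint_emb)
```

A preference judgment between two candidate videos for the same prompt would then compare their overall (or per-aspect) scores; the dynamic gating is what lets different inputs emphasize different aspects.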