CBIL: Collective Behavior Imitation Learning for Fish from Real Videos

📅 2024-11-19
🏛️ ACM Transactions on Graphics
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of modeling collective behaviors in high-density, irregular fish schools. We propose the first end-to-end, video-driven self-supervised imitation learning framework that learns spatiotemporal motion patterns directly from raw videos, without requiring ground-truth trajectory annotations. Methodologically, the framework integrates a Masked Video Autoencoder (MVAE) with self-supervised video representation learning to extract robust spatiotemporal features; introduces an adversarial latent-space motion distribution matching mechanism; and incorporates biologically inspired reward functions and prior motion constraints to enhance training stability and behavioral plausibility. Experiments demonstrate substantial improvements in the behavioral diversity and visual fidelity of generated motions. The framework generalizes effectively to multi-species animation synthesis and enables automatic detection of anomalous schooling behaviors in field-captured videos.


๐Ÿ“ Abstract
Reproducing realistic collective behaviors presents a captivating yet formidable challenge. Traditional rule-based methods rely on hand-crafted principles, limiting motion diversity and realism in generated collective behaviors. Recent imitation learning methods learn from data but often require ground-truth motion trajectories and struggle with authenticity, especially in high-density groups with erratic movements. In this paper, we present a scalable approach, Collective Behavior Imitation Learning (CBIL), for learning fish schooling behavior directly from videos, without relying on captured motion trajectories. Our method first leverages Video Representation Learning, in which a Masked Video AutoEncoder (MVAE) extracts implicit states from video inputs in a self-supervised manner. The MVAE effectively maps 2D observations to implicit states that are compact and expressive for the following imitation learning stage. Then, we propose a novel adversarial imitation learning method to effectively capture complex movements of the schools of fish, enabling efficient imitation of the distribution of motion patterns measured in the latent space. It also incorporates bio-inspired rewards alongside priors to regularize and stabilize training. Once trained, CBIL can be used for various animation tasks with the learned collective motion priors. We further show its effectiveness across different species. Finally, we demonstrate the application of our system in detecting abnormal fish behavior from in-the-wild videos.
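As a rough illustration of the self-supervised pretext task described above, the sketch below masks most of a toy video's patch tokens and reconstructs the hidden ones from the visible ones. The linear encoder/decoder and all dimensions are hypothetical stand-ins for illustration only, not the paper's actual MVAE architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "video": 16 patch tokens (e.g. 4 frames x 4 patches), 16 pixels each.
n_tokens, patch_dim, latent_dim = 16, 16, 8
video = rng.normal(size=(n_tokens, patch_dim))

# MAE-style pretext task: hide 75% of the tokens, keep the rest visible.
perm = rng.permutation(n_tokens)
mask = np.zeros(n_tokens, dtype=bool)
mask[perm[: n_tokens * 3 // 4]] = True
visible, hidden = video[~mask], video[mask]

# Linear encoder/decoder stand-ins (hypothetical weights, not the paper's network).
W_enc = rng.normal(size=(patch_dim, latent_dim)) / np.sqrt(patch_dim)
W_dec = rng.normal(size=(latent_dim, patch_dim)) / np.sqrt(latent_dim)

latent = visible @ W_enc            # compact implicit states for the visible tokens
context = latent.mean(axis=0)       # pooled summary of what was seen
pred = np.tile(context @ W_dec, (hidden.shape[0], 1))  # predict every hidden patch

# Self-supervised objective: reconstruct the masked patches from the visible ones.
loss = float(np.mean((pred - hidden) ** 2))
print(latent.shape, loss > 0)
```

Training on this reconstruction loss is what lets the encoder produce the compact implicit states that the later imitation stage consumes.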
Problem

Research questions and friction points this paper is trying to address.

Learn fish schooling behavior from videos without motion trajectories
Extract implicit states from videos using Masked Video AutoEncoder
Detect abnormal fish behavior in real-world videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Masked Video AutoEncoder for self-supervised learning
Adversarial imitation learning captures complex fish movements
Incorporates bio-inspired rewards to stabilize training
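The adversarial imitation idea above can be sketched with a GAIL-style reward computed in latent space: a discriminator tries to tell real-video latent states from simulated ones, and the simulated agents are rewarded for fooling it. The logistic discriminator, its weights, and the latent states below are toy stand-ins assumed for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(z, w, b):
    """Logistic stand-in: probability that a latent state came from the real videos."""
    return 1.0 / (1.0 + np.exp(-(z @ w + b)))

def imitation_reward(z, w, b, eps=1e-8):
    """GAIL-style reward: large when the discriminator mistakes simulated motion for real."""
    return -np.log(1.0 - discriminator(z, w, b) + eps)

latent_dim = 8
w, b = rng.normal(size=latent_dim), 0.0      # hypothetical discriminator weights
z_real = rng.normal(size=(5, latent_dim))    # latent states from real fish videos
z_sim = rng.normal(size=(5, latent_dim))     # latent states from simulated agents

# Discriminator objective (to minimize): classify real vs. simulated latents.
d_loss = -np.mean(np.log(discriminator(z_real, w, b) + 1e-8)) \
         - np.mean(np.log(1.0 - discriminator(z_sim, w, b) + 1e-8))

rewards = imitation_reward(z_sim, w, b)      # one reward per simulated state
print(rewards.shape, np.all(rewards > 0))
```

In a full pipeline this reward (plus the bio-inspired terms and priors) would drive a reinforcement-learning update of the fish policy while the discriminator is trained in alternation.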
Yifan Wu
The University of Hong Kong, Hong Kong
Zhiyang Dou
The University of Hong Kong, Hong Kong; University of Pennsylvania, U.S.A.
Yuko Ishiwaka
SoftBank Corp., Japan
Shun Ogawa
SoftBank Corp., Japan
Yuke Lou
Peking University
Character Animation · Computer Graphics · Computer Vision
Wenping Wang
Texas A&M University
Computer Graphics · Geometric Computing
Lingjie Liu
Assistant Professor at UPenn
Computer Graphics · Computer Vision · Deep Learning
Taku Komura
The University of Hong Kong
Character Animation · Computer Graphics · Robotics