Learning Task-Agnostic Skill Bases to Uncover Motor Primitives in Animal Behaviors

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the oversimplification inherent in the discrete-syllable assumption commonly adopted in animal behavior modeling. We propose a skill-basis-driven imitation learning framework that disentangles reusable, task-agnostic continuous motor primitives from kinematic trajectories without supervision. Methodologically, we present the first integration of reinforcement learning with transition-probability-based representation learning, enabling interpretable, continuously evolving skill-basis inference. Our approach advances beyond traditional discrete behavioral-syllable paradigms through three core innovations: (i) skill-basis representation learning, (ii) dynamically mixed policy parameterization, and (iii) generative trajectory modeling. Extensive evaluation across grid-world environments, maze simulations, and real-world free-behavior video data demonstrates that our method extracts highly transferable skill components and generates more realistic, temporally coherent, and generalizable behavioral trajectories.

📝 Abstract
Animals flexibly recombine a finite set of core motor primitives to meet diverse task demands, but existing behavior-segmentation methods oversimplify this process by imposing discrete syllables under restrictive generative assumptions. To reflect the animal behavior generation procedure, we introduce skill-based imitation learning (SKIL) for behavior understanding, a reinforcement learning-based imitation framework that (1) infers interpretable skill sets, i.e., latent basis functions of behavior, by leveraging representation learning on transition probabilities, and (2) parameterizes policies as dynamic mixtures of these skills. We validate our approach on a simple grid world, a discrete labyrinth, and unconstrained videos of freely moving animals. Across tasks, it identifies reusable skill components, learns continuously evolving compositional policies, and generates realistic trajectories beyond the capabilities of traditional discrete models. By exploiting generative behavior modeling with compositional representations, our method offers a concise, principled account of how complex animal behaviors emerge from dynamic combinations of fundamental motor primitives.
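The abstract describes inferring skill sets via "representation learning on transition probabilities." As a toy illustration of that idea (not the paper's actual method), one can estimate an empirical transition-probability matrix from discrete state sequences and factorize it to obtain a continuous, low-dimensional embedding of states shaped by their transition structure; all names and the SVD-based factorization here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_latent = 6, 2

# Simulated discrete trajectory (stand-in for segmented behavior data).
traj = rng.integers(0, n_states, size=1000)

# Empirical transition-probability matrix P[s, s'] with a small smoothing prior.
P = np.full((n_states, n_states), 1e-6)
for s, s_next in zip(traj[:-1], traj[1:]):
    P[s, s_next] += 1
P /= P.sum(axis=1, keepdims=True)

# Low-rank factorization: the top singular vectors give each state a
# continuous embedding derived from its transition structure.
U, S, Vt = np.linalg.svd(P)
state_embedding = U[:, :n_latent] * S[:n_latent]  # shape (n_states, n_latent)
```

In this toy version, states with similar outgoing transition distributions land near each other in the embedding; the paper's framework learns such representations jointly with an imitation policy rather than via a one-shot SVD.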
Problem

Research questions and friction points this paper is trying to address.

Identify reusable motor primitives in animal behaviors
Learn dynamic skill mixtures for behavior policies
Generate realistic trajectories beyond discrete models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for imitation
Infers interpretable skill sets
Parameterizes policies as skill mixtures
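A minimal sketch of what "policies as dynamic mixtures of skills" can mean: each skill defines an action distribution, and a state-dependent weighting combines them. The linear parameterization, shapes, and all variable names below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_skills, state_dim, n_actions = 4, 8, 5

# Per-skill policy parameters and a state-dependent mixing map (both toy).
skill_bases = rng.normal(size=(n_skills, state_dim, n_actions))
W_mix = rng.normal(size=(state_dim, n_skills))

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def policy(state):
    """Action distribution: sum_k w_k(state) * pi_k(a | state)."""
    weights = softmax(state @ W_mix)                                   # (n_skills,)
    skill_probs = softmax(np.einsum("d,kda->ka", state, skill_bases))  # (n_skills, n_actions)
    return weights @ skill_probs                                       # (n_actions,)

p = policy(rng.normal(size=state_dim))
```

Because the output is a convex combination of per-skill distributions, it is itself a valid action distribution, and the mixture weights can evolve continuously with the state rather than switching between discrete syllables.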
Jiyi Wang
School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332
Jingyang Ke
Georgia Tech
Reinforcement Learning · Multimodal LLM · Computational Neuroscience
Bo Dai
School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332
Anqi Wu
Assistant Professor, Computational Science and Engineering, Georgia Tech
machine learning · computational and statistical neuroscience