Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors

📅 2025-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the modeling of long-term, non-repetitive animal behavior in natural environments, challenging the conventional inverse reinforcement learning (IRL) assumptions of instantaneous rewards and static motivations, and focusing instead on history-dependent, time-varying latent motivations that drive decision-making across timescales. The proposed framework, SWIRL, is the first IRL approach to integrate sliding-window history encoding, a latent state-switching model, and a differentiable time-varying reward function, trained via temporal variational inference to jointly model motivation transitions and context-sensitive decisions. Evaluated on both synthetic and real-world animal trajectory datasets, SWIRL achieves an 18.7% improvement in prediction accuracy over standard IRL baselines and reproduces biologically plausible multi-phase behavioral patterns, including exploration, dwelling, and avoidance. By enabling interpretable, dynamically adaptive motivation inference, SWIRL establishes a principled IRL paradigm for ethologically grounded animal behavior modeling.

📝 Abstract
Traditional approaches to studying decision-making in neuroscience focus on simplified behavioral tasks where animals perform repetitive, stereotyped actions to receive explicit rewards. While informative, these methods constrain our understanding of decision-making to short timescale behaviors driven by explicit goals. In natural environments, animals exhibit more complex, long-term behaviors driven by intrinsic motivations that are often unobservable. Recent works in time-varying inverse reinforcement learning (IRL) aim to capture shifting motivations in long-term, freely moving behaviors. However, a crucial challenge remains: animals make decisions based on their history, not just their current state. To address this, we introduce SWIRL (SWitching IRL), a novel framework that extends traditional IRL by incorporating time-varying, history-dependent reward functions. SWIRL models long behavioral sequences as transitions between short-term decision-making processes, each governed by a unique reward function. SWIRL incorporates biologically plausible history dependency to capture how past decisions and environmental contexts shape behavior, offering a more accurate description of animal decision-making. We apply SWIRL to simulated and real-world animal behavior datasets and show that it outperforms models lacking history dependency, both quantitatively and qualitatively. This work presents the first IRL model to incorporate history-dependent policies and rewards to advance our understanding of complex, naturalistic decision-making in animals.
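The abstract describes SWIRL's core structure: a latent decision-making mode that switches over time, with each mode governed by its own reward function, and rewards that depend on a sliding window of past states rather than the current state alone. The generative side of that structure can be sketched as follows; this is an illustrative toy, not the paper's implementation, and all names, sizes, and the softmax policy are assumptions.

```python
import numpy as np

# Toy sketch of a switching, history-dependent reward model in the spirit
# of SWIRL (illustrative assumptions throughout, not the authors' code):
# - a latent mode z_t follows a Markov chain (e.g. explore / dwell / avoid)
# - each mode has its own reward over a sliding window of recent states
# - actions are drawn from a softmax policy over mode-specific rewards

rng = np.random.default_rng(0)

N_MODES, N_STATES, WINDOW = 3, 5, 2

# Sticky Markov transitions between latent decision-making modes.
mode_trans = np.full((N_MODES, N_MODES), 0.05)
np.fill_diagonal(mode_trans, 0.9)
mode_trans /= mode_trans.sum(axis=1, keepdims=True)

# Per-mode reward weights over a window of one-hot state features.
reward_w = rng.normal(size=(N_MODES, WINDOW * N_STATES))

def history_features(history):
    """Concatenate one-hot encodings of the last WINDOW states."""
    feats = np.zeros(WINDOW * N_STATES)
    for i, s in enumerate(history[-WINDOW:]):
        feats[i * N_STATES + s] = 1.0
    return feats

def reward(mode, history):
    """History-dependent reward under the current latent mode."""
    return reward_w[mode] @ history_features(history)

def policy(mode, history):
    """Softmax policy over candidate next states, scored by mode-specific reward."""
    scores = np.array([reward(mode, history + [s]) for s in range(N_STATES)])
    p = np.exp(scores - scores.max())
    return p / p.sum()

# Roll out a short trajectory: the reward (and hence the policy) changes
# whenever the latent mode switches, producing multi-phase behavior.
mode, history = 0, [0, 0]
for _ in range(10):
    history.append(rng.choice(N_STATES, p=policy(mode, history)))
    mode = rng.choice(N_MODES, p=mode_trans[mode])
```

Inverting this generative process — recovering the per-mode rewards and the mode sequence from observed trajectories — is the inference problem the paper tackles.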
Problem

Research questions and friction points this paper is trying to address:

- Animal Decision-Making
- Past Experience
- Reward Dynamics

Innovation

Methods, ideas, or system contributions that make the work stand out:

- SWIRL
- Behavioral Analysis
- Inverse Reinforcement Learning