Learning Dolly-In Filming From Demonstration Using a Ground-Based Robot

📅 2025-08-30
🤖 AI Summary
Automated dolly-in cinematography traditionally relies on handcrafted reward functions and tedious hyperparameter tuning, hindering creative expression. Method: This paper proposes an end-to-end imitation learning framework based on Generative Adversarial Imitation Learning (GAIL), which learns stylistically rich and temporally smooth camera motion policies directly from expert teleoperation trajectories in simulation—eliminating explicit reward engineering and enabling zero-shot transfer to a real ground robot. Contribution/Results: To our knowledge, this is the first successful application of GAIL to real-time, high-precision cinematic camera motion control. In simulation, our method converges faster and exhibits lower policy variance than PPO. On physical hardware, it significantly outperforms TD3 in composition stability and subject alignment accuracy. By removing the reward design bottleneck and supporting cross-platform zero-shot deployment, our approach establishes a scalable paradigm for creative robotics.

📝 Abstract
Cinematic camera control demands a balance of precision and artistry: qualities that are difficult to encode through handcrafted reward functions. While reinforcement learning (RL) has been applied to robotic filmmaking, its reliance on bespoke rewards and extensive tuning limits creative usability. We propose a Learning from Demonstration (LfD) approach using Generative Adversarial Imitation Learning (GAIL) to automate dolly-in shots with a free-roaming, ground-based filming robot. Expert trajectories are collected via joystick teleoperation in simulation, capturing smooth, expressive motion without explicit objective design. Trained exclusively on these demonstrations, our GAIL policy outperforms a PPO baseline in simulation, achieving higher rewards, faster convergence, and lower variance. Crucially, it transfers directly to a real-world robot without fine-tuning, achieving more consistent framing and subject alignment than a prior TD3-based method. These results show that LfD offers a robust, reward-free alternative to RL in cinematic domains, enabling real-time deployment with minimal technical effort. Our pipeline brings intuitive, stylized camera control within reach of creative professionals, bridging the gap between artistic intent and robotic autonomy.
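The adversarial setup the abstract describes can be sketched in miniature: a discriminator learns to separate expert (state, action) pairs from policy rollouts, and the policy is improved against the surrogate reward -log(1 - D(s, a)), so it is paid for producing expert-like motion without any handcrafted reward. The 1-D "dolly-in" dynamics, the linear policy, and all names below are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D "dolly-in": state s = camera-to-subject distance; the assumed
# expert closes the gap smoothly toward a framing distance of 1.0.
def expert_action(s):
    return 0.5 * (1.0 - s)

# Logistic discriminator on features [s, a, 1]; expert pairs are labelled 1.
w = np.zeros(3)

# Linear policy a = theta[0] * s + theta[1], initialised to do nothing.
theta = np.array([0.0, 0.0])

def policy_action(s, th):
    return th[0] * s + th[1]

for it in range(300):
    # --- discriminator step: one logistic-regression gradient update ---
    s_e = rng.uniform(1.5, 3.0, size=32)
    s_p = rng.uniform(1.5, 3.0, size=32)
    X = np.column_stack([np.concatenate([s_e, s_p]),
                         np.concatenate([expert_action(s_e),
                                         policy_action(s_p, theta)]),
                         np.ones(64)])
    y = np.concatenate([np.ones(32), np.zeros(32)])
    w += 0.1 * X.T @ (y - sigmoid(X @ w)) / 64

    # --- policy step: finite-difference ascent on the GAIL surrogate
    # reward -log(1 - D(s, a)), evaluated on a fixed state batch ---
    s_b = rng.uniform(1.5, 3.0, size=32)

    def surrogate(th):
        d = sigmoid(w[0] * s_b + w[1] * policy_action(s_b, th) + w[2])
        return np.mean(-np.log(1.0 - d + 1e-8))

    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = 1e-3
        grad[i] = (surrogate(theta + e) - surrogate(theta - e)) / 2e-3
    theta += 0.05 * grad

# Compare learned actions with the expert's at two test distances.
s_test = np.array([2.0, 2.5])
gap = np.mean(np.abs(policy_action(s_test, theta) - expert_action(s_test)))
```

In the full method the discriminator and policy are neural networks and the policy step uses a trust-region-style RL update on the surrogate reward; the alternating discriminator/policy structure above is the part that carries over.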
Problem

Research questions and friction points this paper is trying to address.

Automating dolly-in shots with a free-roaming filming robot
Eliminating handcrafted reward functions in robotic cinematography
Bridging artistic intent and robotic autonomy through imitation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning from Demonstration using GAIL
Automated dolly-in shots with free-roaming robot
Direct real-world transfer without fine-tuning
Philip Lorimer
Department of Computer Science, University of Bath, UK
Alan Hunter
University of Bath
Underwater remote sensing, sonar, ultrasonics, marine robotics
Wenbin Li
Department of Computer Science, University of Bath, UK