🤖 AI Summary
To address the low control precision and poor scene adaptability of freely mobile camera robots in dolly-in cinematography, this paper proposes a reinforcement learning–based, end-to-end robust control framework. The method jointly models motion control and gimbal attitude regulation—replacing conventional decoupled PD controllers—and integrates a camera-gimbal system on the ROSBot 2.0 platform to enable closed-loop autonomous dolly-in capture in both simulation and real-world environments. Compared to baseline approaches, the framework achieves significantly improved trajectory tracking accuracy (a 42% reduction in mean tracking error) and enhanced dynamic stability, demonstrating robustness on unstructured terrain and under varying illumination. This work provides a deployable technical pathway for intelligent cinematic camera motion control and advances the practical adoption of reinforcement learning in professional filmmaking.
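The joint formulation described above—one policy commanding both base motion and gimbal attitude—can be sketched as follows. This is a minimal illustration only: the observation layout, action layout, and the hand-written stand-in policy are assumptions for exposition, not the paper's actual network or interfaces.

```python
# Illustrative sketch (not the paper's code): the combined formulation
# treats base motion and gimbal attitude as one joint observation and
# one joint action that a single RL policy maps between each step,
# instead of two decoupled control loops.

from typing import NamedTuple

class Observation(NamedTuple):
    subject_distance: float   # metres to the filmed subject
    subject_bearing: float    # rad, robot body frame
    pixel_offset_x: float     # subject offset in the image plane
    pixel_offset_y: float
    linear_vel: float         # current base forward velocity
    gimbal_pitch: float       # current gimbal pitch angle

class Action(NamedTuple):
    linear_vel: float         # base forward command
    angular_vel: float        # base yaw-rate command
    gimbal_pan_rate: float    # gimbal commands live in the same action
    gimbal_tilt_rate: float

def policy(obs: Observation) -> Action:
    """Placeholder for the learned policy: maps the joint observation
    to the joint action. A trained network would replace this body."""
    # Hand-written stand-in: drive toward a 1 m target distance while
    # recentring the subject in frame (gains are arbitrary).
    return Action(
        linear_vel=0.5 * (obs.subject_distance - 1.0),
        angular_vel=0.8 * obs.subject_bearing,
        gimbal_pan_rate=-1.2 * obs.pixel_offset_x,
        gimbal_tilt_rate=-1.2 * obs.pixel_offset_y,
    )
```

Coupling the two subsystems in one action vector lets a learned policy coordinate base motion and gimbal compensation jointly, which decoupled loops cannot do.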
📝 Abstract
Free-roaming dollies enrich filmmaking with dynamic camera movement, but automating their control remains an open problem. We apply Reinforcement Learning (RL) to automate dolly-in shots with free-roaming, ground-based filming robots, addressing control challenges that traditional methods leave unresolved. By comparing combined control of robot motion and gimbal attitude against independent control strategies, we demonstrate the effectiveness of the combined approach for precise filming tasks. Our RL pipeline surpasses a traditional Proportional-Derivative (PD) controller in simulation and proves effective in real-world tests on a modified ROSBot 2.0 platform equipped with a camera turret. These results validate the practicality of the approach, set the stage for research into more complex filming scenarios, and help bridge the gap between robotic control technology and creative filmmaking.
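For context, the decoupled PD baseline that the RL pipeline is compared against can be sketched as two independent loops, one for the base and one for the gimbal. The gains, error signals, and tick rate below are illustrative assumptions, not the paper's actual controller.

```python
# Minimal sketch of a decoupled PD baseline for a dolly-in shot:
# one loop drives the robot's forward velocity toward a target subject
# distance, an independent loop pans the gimbal to keep the subject
# centred in frame. All gains and signal names are assumptions.

class PD:
    def __init__(self, kp: float, kd: float):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error: float, dt: float) -> float:
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Two independent controllers: base motion and gimbal attitude never
# share state, which is the limitation the combined approach addresses.
drive = PD(kp=0.8, kd=0.1)    # forward velocity from distance error
gimbal = PD(kp=1.5, kd=0.2)   # pan rate from image-plane offset

def control_step(distance_to_subject: float, target_distance: float,
                 pixel_offset: float, dt: float = 0.05):
    """Return (linear_velocity, gimbal_pan_rate) for one control tick."""
    v = drive.step(distance_to_subject - target_distance, dt)
    w = gimbal.step(-pixel_offset, dt)  # negate: pan toward the subject
    return v, w
```

Because each loop only sees its own error signal, base motion that perturbs the image (e.g. bumps on unstructured terrain) must be corrected reactively by the gimbal loop, which is the coupling the joint RL policy exploits.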