Learning Real-World Acrobatic Flight from Human Preferences

📅 2025-08-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the complex dynamics, rapid motion, and difficulty of formalizing reward objectives in autonomous aerobatic drone flight, this paper applies a preference-based reinforcement learning (PbRL) framework that eliminates manual reward engineering. Building on Preference-based Proximal Policy Optimization (Preference PPO), it introduces Reward Ensemble under Confidence (REC), a confidence-aware extension of the reward-learning objective that improves preference-modeling stability and better captures subjective stylistic criteria such as pose aesthetics and motion smoothness. Policies trained in simulation transfer successfully to real drones, executing diverse high-agility aerobatic maneuvers. In simulation, the method reaches 88.4% of the performance of a handcrafted shaped-reward baseline, substantially outperforming standard Preference PPO (55.2%), and the probabilistic reward model is further validated in a representative MuJoCo continuous-control environment. Human evaluations also expose the limits of manual reward design: handcrafted rewards agree with human preferences only 60.7% of the time.
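
The abstract does not spell out REC's formulation, but the idea maps onto a standard PbRL reward-learning setup. The PyTorch sketch below is a hypothetical illustration, not the authors' implementation: an ensemble of small reward networks is fit to pairwise segment preferences with the Bradley-Terry model, and at rollout time predictions are aggregated with an ensemble-disagreement penalty as a stand-in for the confidence weighting. `RewardNet`, `ensemble_reward`, and the mean-minus-std aggregation are all assumptions.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Small MLP mapping a state-action pair to a scalar reward."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def segment_return(model, seg_obs, seg_act):
    # Sum predicted per-step rewards over a trajectory segment
    # of shape (batch, T, obs_dim) / (batch, T, act_dim).
    return model(seg_obs, seg_act).sum(dim=-1)

def preference_loss(ensemble, seg_a, seg_b, label):
    # Bradley-Terry preference loss averaged over the ensemble.
    # label: float tensor of shape (batch,), 1.0 if segment A is preferred.
    losses = []
    for model in ensemble:
        r_a = segment_return(model, *seg_a)
        r_b = segment_return(model, *seg_b)
        p_a = torch.sigmoid(r_a - r_b)  # P(A preferred over B)
        losses.append(nn.functional.binary_cross_entropy(p_a, label))
    return torch.stack(losses).mean()

def ensemble_reward(ensemble, obs, act):
    # Confidence-style aggregation (assumed, not from the paper): mean
    # prediction penalized by ensemble disagreement (needs >= 2 nets).
    preds = torch.stack([m(obs, act) for m in ensemble])  # (K, batch)
    return preds.mean(dim=0) - preds.std(dim=0)
```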

📝 Abstract
Preference-based reinforcement learning (PbRL) enables agents to learn control policies without requiring manually designed reward functions, making it well-suited for tasks where objectives are difficult to formalize or inherently subjective. Acrobatic flight poses a particularly challenging problem due to its complex dynamics, rapid movements, and the importance of precise execution. In this work, we explore the use of PbRL for agile drone control, focusing on the execution of dynamic maneuvers such as powerloops. Building on Preference-based Proximal Policy Optimization (Preference PPO), we propose Reward Ensemble under Confidence (REC), an extension to the reward learning objective that improves preference modeling and learning stability. Our method achieves 88.4% of the shaped reward performance, compared to 55.2% with standard Preference PPO. We train policies in simulation and successfully transfer them to real-world drones, demonstrating multiple acrobatic maneuvers where human preferences emphasize stylistic qualities of motion. Furthermore, we demonstrate the applicability of our probabilistic reward model in a representative MuJoCo environment for continuous control. Finally, we highlight the limitations of manually designed rewards, observing only 60.7% agreement with human preferences. These results underscore the effectiveness of PbRL in capturing complex, human-centered objectives across both physical and simulated domains.
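
To see how a learned preference reward plugs into policy optimization, Preference PPO-style training can be viewed as ordinary PPO run against an environment whose reward signal is replaced by the reward model's prediction. The Gymnasium wrapper below is a minimal sketch of that plumbing under that assumption; `LearnedRewardWrapper` and `reward_fn` are illustrative names, and the stable-baselines3 usage in the comments is an assumption rather than the authors' stack.

```python
import gymnasium as gym
import torch

class LearnedRewardWrapper(gym.Wrapper):
    """Substitute the env reward with a learned preference reward so an
    off-the-shelf PPO implementation can train against it unchanged."""

    def __init__(self, env, reward_fn):
        super().__init__(env)
        self.reward_fn = reward_fn  # e.g. ensemble_reward from the sketch above
        self._last_obs = None

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._last_obs = obs
        return obs, info

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        with torch.no_grad():
            # Score the (state, action) pair that produced this transition.
            o = torch.as_tensor(self._last_obs, dtype=torch.float32).unsqueeze(0)
            a = torch.as_tensor(action, dtype=torch.float32).unsqueeze(0)
            reward = float(self.reward_fn(o, a))
        self._last_obs = obs
        return obs, reward, terminated, truncated, info

# Hypothetical usage on a MuJoCo-style continuous-control task:
#   env = LearnedRewardWrapper(gym.make("HalfCheetah-v4"), reward_fn)
#   from stable_baselines3 import PPO
#   PPO("MlpPolicy", env).learn(total_timesteps=100_000)
```
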
Problem

Research questions and friction points this paper is trying to address.

Learning acrobatic drone flight from human preferences
Improving reward modeling for agile maneuver execution
Transferring simulation-trained policies to real-world drones
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference-based reinforcement learning for drone control
Reward Ensemble under Confidence for preference modeling
Simulation-trained policies transferred to real drones (see the sketch after this list)
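
The paper reports successful sim-to-real transfer, but the abstract does not describe the transfer recipe. A common approach for quadrotors is domain randomization of the simulated dynamics; the snippet below sketches that generic technique with invented parameter ranges, purely as an illustration and not as the authors' method.

```python
import numpy as np

def randomize_dynamics(nominal, rng):
    # Sample one perturbed set of physical parameters per training episode.
    # Ranges are invented for illustration; the paper does not specify them.
    return {
        "mass":        nominal["mass"]        * rng.uniform(0.90, 1.10),
        "thrust_gain": nominal["thrust_gain"] * rng.uniform(0.85, 1.15),
        "drag_coeff":  nominal["drag_coeff"]  * rng.uniform(0.80, 1.20),
        "motor_delay": nominal["motor_delay"] + rng.uniform(0.0, 0.01),  # s
    }

rng = np.random.default_rng(0)
nominal = {"mass": 0.75, "thrust_gain": 1.0, "drag_coeff": 0.30, "motor_delay": 0.02}
episode_params = randomize_dynamics(nominal, rng)  # re-sample every episode
```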