🤖 AI Summary
This work addresses the problem of complete 3D surface-coverage perception of target objects by autonomous UAVs in complex three-dimensional environments. We propose a trajectory–viewpoint co-planning method that jointly optimizes the flight path and camera pose. Our key contributions are: (i) the first integration of real-time ray tracing into a receding-horizon optimization framework, where visual visibility serves as the primary objective for joint trajectory and viewpoint optimization; and (ii) an end-to-end optimal control formulation based on mixed-integer programming (MIP) that unifies kinematic constraints, viewpoint coverage requirements, and geometric visibility modeling. Evaluated on both synthetic and real-world scenes, the method achieves over 98% 3D surface coverage while improving planning efficiency by 3.2× over conventional sequential approaches, and shows markedly improved robustness on targets with intricate geometry.
📝 Abstract
This work proposes a jointly optimized trajectory-generation and camera-control approach that enables an autonomous agent, such as an unmanned aerial vehicle (UAV) operating in a 3D environment, to plan and execute coverage trajectories that maximally cover the surface area of a 3D object of interest. Specifically, the UAV's kinematic and camera control inputs are jointly optimized over a rolling planning horizon to achieve complete 3D coverage of the object. The proposed controller incorporates ray tracing into the planning process to simulate the propagation of light rays and thereby determine which parts of the object are visible through the UAV's camera, enabling the generation of precise look-ahead coverage trajectories. The coverage planning problem is formulated as a rolling finite-horizon optimal control problem and solved with mixed-integer programming. Extensive real-world and synthetic experiments validate the performance of the proposed approach.
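To make the ray-traced visibility idea concrete, the sketch below shows a brute-force version of the kind of check such a planner needs: cast a ray from the camera toward each mesh face and mark the face visible only if it is the nearest hit (i.e., not occluded). This is an illustrative toy, not the authors' implementation; the mesh, camera pose, and helper names (`ray_triangle`, `visible_faces`) are assumptions for the example.

```python
# Illustrative visibility check via ray casting (Moller-Trumbore intersection).
# Not the paper's code: mesh, camera, and function names are hypothetical.

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Return hit distance t along `direction` for triangle `tri`, or None."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)          # edge a->b
    e2 = (cx - ax, cy - ay, cz - az)          # edge a->c
    # p = direction x e2
    px = direction[1] * e2[2] - direction[2] * e2[1]
    py = direction[2] * e2[0] - direction[0] * e2[2]
    pz = direction[0] * e2[1] - direction[1] * e2[0]
    det = e1[0] * px + e1[1] * py + e1[2] * pz
    if abs(det) < eps:                        # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t0 = (origin[0] - ax, origin[1] - ay, origin[2] - az)
    u = (t0[0] * px + t0[1] * py + t0[2] * pz) * inv
    if u < 0.0 or u > 1.0:
        return None
    # q = t0 x e1
    qx = t0[1] * e1[2] - t0[2] * e1[1]
    qy = t0[2] * e1[0] - t0[0] * e1[2]
    qz = t0[0] * e1[1] - t0[1] * e1[0]
    v = (direction[0] * qx + direction[1] * qy + direction[2] * qz) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2[0] * qx + e2[1] * qy + e2[2] * qz) * inv
    return t if t > eps else None

def visible_faces(camera, faces):
    """A face is visible if the ray to its centroid hits it before any other face."""
    visible = set()
    for i, tri in enumerate(faces):
        d = tuple(sum(p[k] for p in tri) / 3.0 - camera[k] for k in range(3))
        hits = [(t, j) for j, f in enumerate(faces)
                if (t := ray_triangle(camera, d, f)) is not None]
        if hits and min(hits)[1] == i:
            visible.add(i)
    return visible

# Two parallel triangles along +z; the nearer one occludes the farther one.
near = ((-1, -1, 1), (1, -1, 1), (0, 1, 1))
far  = ((-1, -1, 2), (1, -1, 2), (0, 1, 2))
print(visible_faces((0.0, 0.0, 0.0), [near, far]))  # -> {0}
```

In the paper's setting this occlusion test would be evaluated at candidate future camera poses inside the rolling-horizon optimization, so that the MIP can reward poses whose rays reach still-uncovered surface elements.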