🤖 AI Summary
In dense urban environments, emergency drone landings face significant safety risks due to dynamic obstacles (e.g., vehicles, pedestrians) and challenging visual conditions (e.g., abrupt illumination changes, low-texture surfaces).
Method: This paper proposes a vision-based risk-aware framework featuring a novel risk-guided end-to-end decision mechanism. It integrates robust moving-object detection, illumination-invariant pixel-level risk modeling, and spatiotemporally consistent landing-point tracking—departing from conventional static-terrain assumptions. The framework generates semantic-segmentation-driven dynamic risk maps, employs altitude-adaptive safety thresholding, and applies temporal stabilization to ensure decision consistency.
Contribution/Results: Evaluated in diverse real-world urban scenarios, the method achieves an emergency landing success rate above 90%, substantially reducing collision and false-landing rates. It remains robust in highly dynamic scenes, on low-texture surfaces, and under rapid illumination changes, validating its practical applicability for autonomous drone operations in urban environments.
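To make the pipeline above concrete, here is a minimal sketch of the semantic-segmentation-driven risk map and altitude-adaptive safety thresholding. The class IDs, per-class risk weights, threshold bounds, and the linear altitude schedule are all illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical per-class risk weights (illustrative, not from the paper).
CLASS_RISK = {0: 0.05,  # pavement
              1: 0.20,  # grass
              2: 0.95,  # person
              3: 0.90,  # vehicle
              4: 0.60}  # other structures

def risk_map_from_segmentation(seg: np.ndarray) -> np.ndarray:
    """Map each pixel's semantic class ID to a scalar risk in [0, 1]."""
    lut = np.zeros(max(CLASS_RISK) + 1, dtype=np.float32)
    for cls, risk in CLASS_RISK.items():
        lut[cls] = risk
    return lut[seg]  # integer-array indexing applies the lookup per pixel

def altitude_adaptive_threshold(altitude_m: float,
                                t_high: float = 0.5,
                                t_low: float = 0.2,
                                alt_max: float = 30.0) -> float:
    """Relax the acceptable-risk threshold at high altitude (many candidate
    zones still in view) and tighten it near the ground (commitment phase).
    The linear schedule and its bounds are assumptions."""
    frac = min(max(altitude_m / alt_max, 0.0), 1.0)
    return t_low + (t_high - t_low) * frac

def safe_pixels(seg: np.ndarray, altitude_m: float) -> np.ndarray:
    """Boolean mask of pixels whose risk is acceptable at this altitude."""
    return risk_map_from_segmentation(seg) <= altitude_adaptive_threshold(altitude_m)

# Tiny 2x3 segmentation: pavement, pavement, person / grass, vehicle, pavement.
seg = np.array([[0, 0, 2],
                [1, 3, 0]])
mask = safe_pixels(seg, altitude_m=25.0)  # lenient threshold at 25 m
```

A landing-zone selector would then search `mask` for the largest connected low-risk region rather than a single pixel; the threshold schedule simply shrinks that search space as the drone descends.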
📝 Abstract
Landing safely in crowded urban environments remains an essential yet challenging task for Unmanned Aerial Vehicles (UAVs), especially in emergency situations. In this work, we propose a risk-aware approach that harnesses semantic segmentation to continuously evaluate potential hazards in the drone's field of view. Using a specialized deep neural network to assign pixel-level risk values and a risk-map-based algorithm, our method adaptively identifies a stable Safe Landing Zone (SLZ) despite moving critical obstacles such as vehicles and people, and other visual challenges such as shifting illumination. A control system then guides the UAV toward this low-risk region, employing altitude-dependent safety thresholds and temporal landing-point stabilization to ensure robust descent trajectories. Experimental validation in diverse urban environments demonstrates the effectiveness of our approach, achieving landing success rates above 90% in highly challenging real-world scenarios and showing significant improvements across several risk metrics. Our findings suggest that risk-oriented vision methods can substantially reduce the risk of accidents during emergency landings, particularly in complex, unstructured urban scenarios densely populated with moving obstacles, while unlocking the full capabilities of UAVs in urban operations.
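The temporal landing-point stabilization mentioned above can be sketched as follows. This is an assumed mechanism for illustration, not the paper's exact algorithm: small frame-to-frame drift in the selected SLZ is smoothed exponentially, while a large jump to a different candidate must persist for several consecutive frames before the tracker commits to it. The parameters `alpha`, `switch_px`, and `persist_frames` are hypothetical:

```python
class LandingPointStabilizer:
    """Keeps the commanded landing point stable across frames (sketch)."""

    def __init__(self, alpha: float = 0.3, switch_px: float = 40.0,
                 persist_frames: int = 5):
        self.alpha = alpha                  # blend factor for small corrections
        self.switch_px = switch_px          # jump size treated as a new candidate
        self.persist_frames = persist_frames
        self._point = None                  # current stabilized landing point
        self._pending_count = 0             # consecutive frames of a far candidate

    def update(self, candidate: tuple) -> tuple:
        if self._point is None:
            self._point = candidate
            return self._point
        dx = candidate[0] - self._point[0]
        dy = candidate[1] - self._point[1]
        if (dx * dx + dy * dy) ** 0.5 < self.switch_px:
            # Small drift: exponentially blend toward the new measurement.
            self._point = (self._point[0] + self.alpha * dx,
                           self._point[1] + self.alpha * dy)
            self._pending_count = 0
        else:
            # Large jump: require persistence before abandoning the current SLZ.
            self._pending_count += 1
            if self._pending_count >= self.persist_frames:
                self._point = candidate
                self._pending_count = 0
        return self._point
```

The hysteresis prevents the descent controller from chasing a landing point that flickers between two regions when their risk scores are nearly tied, which would otherwise destabilize the trajectory.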