AsterNav: Autonomous Aerial Robot Navigation In Darkness Using Passive Computation

📅 2026-01-24
🏛️ IEEE Robotics and Automation Letters
🤖 AI Summary
This study addresses the challenge of safe autonomous navigation for micro aerial vehicles in GPS-denied, completely dark environments—a critical limitation for post-disaster search and rescue missions. The authors propose an onboard perception system that fuses an infrared monocular camera, a large-aperture coded aperture lens, and structured light to enable depth estimation via a defocus-driven model named AsterNet. This approach achieves fully onboard, monocular structured-light-based navigation in total darkness without requiring fine-tuning on real-world data, as AsterNet generalizes directly from optical simulation to physical deployment and exhibits strong robustness to variations in structured light patterns. Experimental results demonstrate a 95.5% mission success rate in navigating through complex, unknown environments containing diverse obstacles—including thin ropes as narrow as 6.25 mm in diameter—highlighting the system’s reliability and practicality.

📝 Abstract
Autonomous aerial navigation in absolute darkness is crucial for post-disaster search and rescue operations, which often take place amid disaster-zone power outages. Yet, due to resource constraints, tiny aerial robots, though perfectly suited for these operations, cannot navigate safely in darkness to find survivors. In this paper, we present an autonomous aerial robot for navigation in the dark that combines an Infra-Red (IR) monocular camera with a large-aperture coded lens and structured light, without external infrastructure such as GPS or motion capture. Our approach obtains depth-dependent defocus cues (each structured light point appears as a pattern whose shape depends on depth), which act as a strong prior for our AsterNet deep depth estimation model. The model is trained in simulation on data generated by a simple optical model and transfers directly to the real world without any fine-tuning or retraining. AsterNet runs onboard the robot at 20 Hz on an NVIDIA Jetson Orin™ Nano. Furthermore, our network is robust to changes in the structured light pattern and to the relative placement of the pattern emitter and IR camera, leading to simplified and cost-effective construction. We evaluate and demonstrate our proposed navigation approach AsterNav, using depth from AsterNet, in many real-world experiments with only onboard sensing and computation, including dark matte obstacles and thin ropes (6.25 mm diameter), achieving an overall success rate of 95.5% with unknown object shapes, locations, and materials. To the best of our knowledge, this is the first work on monocular, structured-light-based quadrotor navigation in absolute darkness.
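The abstract's key cue, that each structured-light point blurs in a depth-dependent way through the large-aperture lens, follows from the standard thin-lens defocus relation. A minimal sketch of that relation is below; it is not the paper's optical model, and all parameter values are hypothetical examples, not AsterNav's actual camera parameters.

```python
def blur_diameter(z, f=0.016, aperture=0.012, focus_dist=1.0):
    """Thin-lens blur-circle diameter (m) on the sensor for a point at
    depth z (m), given focal length f, aperture diameter, and the depth
    the lens is focused at. All values here are illustrative only."""
    # The sensor sits at the image distance of the in-focus plane.
    v_focus = focus_dist * f / (focus_dist - f)
    # A point at depth z images at distance v_z behind the lens.
    v_z = z * f / (z - f)
    # Similar triangles: the defocused light cone spreads over this diameter.
    return aperture * abs(v_z - v_focus) / v_z

# Blur is zero at the focus plane and grows as depth moves away from it,
# so the size of each projected dot's pattern encodes its depth.
for z in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {z:4.1f} m -> blur {blur_diameter(z) * 1e6:7.1f} um")
```

A larger aperture steepens this depth-to-blur curve, which is presumably why the system pairs the coded lens with a large aperture: it makes the defocus signature a stronger prior for the network.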
Problem

Research questions and friction points this paper is trying to address.

autonomous aerial navigation
absolute darkness
tiny aerial robots
post-disaster search and rescue
onboard sensing
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured light
monocular depth estimation
autonomous navigation in darkness
simulation-to-reality transfer
onboard deep learning
Deepak Singh
Robotics PhD Student at WPI
Robotics · Machine Learning · Computer Vision
Shreyas Khobragade
Perception and Autonomous Robotics (PeAR) Group, Worcester Polytechnic Institute
N. Sanket
Perception and Autonomous Robotics (PeAR) Group, Worcester Polytechnic Institute