Event Spectroscopy: Event-based Multispectral and Depth Sensing using Structured Light

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In forest environments, UAV-based perception suffers from low depth resolution, strong illumination dependence, and latency in multispectral imaging. To address these challenges, this paper introduces the first synchronous multispectral–depth sensing system integrating an event camera with wavelength-tunable structured light. The method achieves sub-millisecond, high-accuracy depth reconstruction (up to 60% lower RMSE than commercial depth sensors) via event-driven phase decoding, while wavelength-modulated structured light combined with spectral calibration enables high-resolution depth and multispectral acquisition on a single sensor. RGB event streams further support color reconstruction and material identification; adding depth to the spectral features improves material classification accuracy by over 30% compared with a color-only baseline, at no extra hardware cost. Evaluated in tropical rainforest conditions, the system significantly boosts leaf and branch classification accuracy while achieving spectral fidelity comparable to commercial spectrometers.

📝 Abstract
Uncrewed aerial vehicles (UAVs) are increasingly deployed in forest environments for tasks such as environmental monitoring and search and rescue, which require safe navigation through dense foliage and precise data collection. Traditional sensing approaches, including passive multispectral and RGB imaging, suffer from latency, poor depth resolution, and strong dependence on ambient light, especially under forest canopies. In this work, we present a novel event spectroscopy system that simultaneously enables high-resolution, low-latency depth reconstruction and multispectral imaging using a single sensor. Depth is reconstructed using structured light, and by modulating the wavelength of the projected structured light, our system captures spectral information in controlled bands between 650 nm and 850 nm. We demonstrate up to 60% improvement in RMSE over commercial depth sensors and validate the spectral accuracy against a reference spectrometer and commercial multispectral cameras, demonstrating comparable performance. A portable version limited to RGB (3 wavelengths) is used to collect real-world depth and spectral data from the Masoala Rainforest. We demonstrate the use of this prototype for color image reconstruction and material differentiation between leaves and branches using spectral and depth data. Our results show that adding depth (available at no extra effort with our setup) to material differentiation improves the accuracy by over 30% compared to the color-only method. Our system, tested in both lab and real-world rainforest environments, shows strong performance in depth estimation, RGB reconstruction, and material differentiation, paving the way for lightweight, integrated, and robust UAV perception and data collection in complex natural environments.
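As a rough illustration of why adding a depth-derived feature can help leaf/branch differentiation, the sketch below runs a nearest-centroid classifier on synthetic data. All numbers, band values, and the roughness feature are illustrative assumptions, not the paper's data or classifier:

```python
import numpy as np

# Hedged, synthetic illustration (not the paper's data, bands, or method):
# compare classification with spectral features alone vs. spectral features
# plus a depth-derived surface-roughness feature.
rng = np.random.default_rng(0)

def sample(material, n):
    """Draw n synthetic feature vectors: 3 reflectance bands + depth roughness."""
    if material == "leaf":
        spec = rng.normal([0.10, 0.45, 0.50], 0.05, (n, 3))   # strong NIR rise
        rough = rng.normal(0.002, 0.001, n)                    # smooth surfaces
    else:  # branch
        spec = rng.normal([0.12, 0.40, 0.45], 0.05, (n, 3))   # overlapping spectra
        rough = rng.normal(0.010, 0.003, n)                    # rougher depth profile
    return np.column_stack([spec, rough])

train = {m: sample(m, 200) for m in ("leaf", "branch")}
test = {m: sample(m, 200) for m in ("leaf", "branch")}

# Normalize so the small-magnitude depth feature is comparable to the bands.
stack = np.vstack(list(train.values()))
mu, sd = stack.mean(axis=0), stack.std(axis=0)

def accuracy(cols):
    cents = {m: ((f - mu) / sd)[:, cols].mean(axis=0) for m, f in train.items()}
    hits = total = 0
    for m, f in test.items():
        z = ((f - mu) / sd)[:, cols]
        d_leaf = np.linalg.norm(z - cents["leaf"], axis=1)
        d_branch = np.linalg.norm(z - cents["branch"], axis=1)
        pred = np.where(d_leaf < d_branch, "leaf", "branch")
        hits += np.sum(pred == m)
        total += len(pred)
    return hits / total

acc_spec = accuracy([0, 1, 2])       # color/spectral only
acc_both = accuracy([0, 1, 2, 3])    # spectral + depth roughness
print(f"spectral only: {acc_spec:.2f}  spectral+depth: {acc_both:.2f}")
```

With overlapping spectral distributions, the extra roughness dimension separates the classes further, mirroring the abstract's point that depth comes "at no extra effort" with this setup yet measurably improves differentiation.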
Problem

Research questions and friction points this paper is trying to address.

Simultaneously achieving high-resolution depth and multispectral imaging for UAVs
Overcoming latency and poor depth resolution in traditional forest sensing
Enabling material differentiation using combined spectral and depth data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-based multispectral and depth sensing using structured light
Modulated wavelength structured light for spectral capture
Single sensor enables simultaneous depth and spectral imaging
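The event-driven phase-decoding idea behind the depth channel can be sketched as follows: a structured-light sweep repeats with a known period, an event's timestamp encodes the projector angle that illuminated its pixel, and depth follows from triangulation. All geometry constants below are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

# Hedged sketch (not the paper's implementation): decode an event timestamp
# into a projector angle, then triangulate depth against the camera ray.
BASELINE = 0.10        # projector-camera baseline [m] (assumed)
T_SWEEP = 1e-3         # sweep period [s], i.e. sub-millisecond updates (assumed)
THETA_0, THETA_1 = np.deg2rad(30), np.deg2rad(60)  # sweep angle range (assumed)

def depth_from_event(t_event, pixel_angle):
    """Triangulate depth from an event timestamp and the camera ray angle.

    Angles are measured from the baseline joining projector and camera.
    """
    phase = (t_event % T_SWEEP) / T_SWEEP            # position in sweep, 0..1
    theta_p = THETA_0 + phase * (THETA_1 - THETA_0)  # projector ray angle
    # Intersect projector ray and camera ray across the baseline:
    # z = b / (cot(theta_p) + cot(theta_c))
    return BASELINE * np.tan(theta_p) * np.tan(pixel_angle) / (
        np.tan(theta_p) + np.tan(pixel_angle))

z = depth_from_event(t_event=0.4e-3, pixel_angle=np.deg2rad(50))
```

Because each pixel reports its own illumination time asynchronously, every event yields an independent depth sample without waiting for a full frame, which is what makes sub-millisecond reconstruction plausible.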
Christian Geckeler
Environmental Robotics Laboratory, Dept. of Environmental Systems Science, ETH Zurich, 8092 Zurich, Switzerland; Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), 8903 Birmensdorf, Switzerland.
Niklas Neugebauer
Environmental Robotics Laboratory, Dept. of Environmental Systems Science, ETH Zurich, 8092 Zurich, Switzerland; Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), 8903 Birmensdorf, Switzerland.
Manasi Muglikar
PhD student, University of Zurich
Computer Vision · Robotics · Event cameras
Davide Scaramuzza
Professor of Robotics and Perception, University of Zurich
Robotics · Robot Vision · Micro Air Vehicles · SLAM · Robot Learning
Stefano Mintchev
Assistant Professor of Environmental Robotics, ETH Zurich
Robotics