Radar and Event Camera Fusion for Agile Robot Ego-Motion Estimation

📅 2025-06-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-speed agile robots (e.g., acrobatic UAVs) suffer from degraded motion estimation in high-dynamic, textureless environments due to sensor latency, motion blur, and image distortion—especially when relying solely on conventional cameras or IMUs. Method: We propose an IMU-free, feature-matching-free heterogeneous sensing fusion framework that directly couples continuous-time event streams from an event camera with Doppler velocity measurements from mmWave radar. A continuous-time state-space model is formulated, and asynchronous temporal fusion is performed via a fixed-lag smoother operating at millisecond-level latency. Contribution/Results: The method significantly improves robustness and real-time performance of velocity estimation under extreme maneuvering and textureless conditions. Evaluated on a custom high-dynamic dataset, it achieves sub-meter-per-second velocity accuracy and supports low-power edge deployment.
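The front end described above recovers translational velocity directly from a single radar scan's Doppler returns, with no frame-to-frame association. A common formulation of this step (a minimal sketch, not the authors' implementation) treats each static return as a linear constraint on the sensor velocity and solves a small least-squares problem:

```python
import numpy as np

def ego_velocity_from_doppler(directions, doppler):
    """Estimate sensor translational velocity v from one radar scan.

    Each return i gives a unit direction d_i (sensor -> target) and a
    radial Doppler speed r_i.  For a static scene the relative radial
    speed satisfies d_i . v = -r_i, so v is the least-squares solution
    of D v = -r over all returns in the scan.
    """
    D = np.asarray(directions, dtype=float)   # (N, 3) unit vectors
    r = np.asarray(doppler, dtype=float)      # (N,) radial speeds
    v, *_ = np.linalg.lstsq(D, -r, rcond=None)
    return v

# Synthetic check: simulate Doppler readings for a known velocity.
rng = np.random.default_rng(0)
d = rng.normal(size=(50, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
v_true = np.array([2.0, -0.5, 0.1])
r = -d @ v_true                               # ideal Doppler readings
v_est = ego_velocity_from_doppler(d, r)
```

Because each scan is solved independently, the estimate stays valid in texture-less scenes where visual feature matching fails; in practice an outlier-rejection step (e.g., RANSAC over returns) would be added to handle moving targets.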

📝 Abstract
Achieving reliable ego-motion estimation for agile robots, e.g., aerobatic aircraft, remains challenging because most robot sensors fail to respond promptly and cleanly to highly dynamic robot motions, often resulting in measurement blurring, distortion, and delays. In this paper, we propose an IMU-free and feature-association-free framework to achieve aggressive ego-motion velocity estimation of a robot platform in highly dynamic scenarios by combining two types of exteroceptive sensors: an event camera and a millimeter-wave radar. First, we use instantaneous raw events and Doppler measurements to derive rotational and translational velocities directly. Without a sophisticated association process between measurement frames, the proposed method is more robust in texture-less and structureless environments and is more computationally efficient for edge computing devices. Then, in the back end, we propose a continuous-time state-space model to fuse the hybrid time-based and event-based measurements to estimate the ego-motion velocity in a fixed-lag smoother fashion. Finally, we validate our velometer framework extensively on self-collected experimental datasets. The results indicate that our IMU-free and association-free ego-motion estimation framework can achieve reliable and efficient velocity output in challenging environments. The source code, illustrative video, and dataset are available at https://github.com/ZzhYgwh/TwistEstimator.
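The back end fuses asynchronous radar and event measurements against a continuous-time trajectory inside a fixed-lag window. The sketch below is a hypothetical 1-D stand-in for that idea, not the paper's code: velocity knots on a fixed grid, asynchronous measurements tied to neighbouring knots by linear interpolation, a smoothness prior between knots, and old data dropped once it falls behind the lag horizon.

```python
import numpy as np

class FixedLagVelocitySmoother:
    """Toy fixed-lag smoother over a piecewise-linear 1-D velocity
    trajectory: knots every dt seconds, asynchronous measurements
    interpolated between the two neighbouring knots, and a smoothness
    prior penalising knot-to-knot velocity change."""

    def __init__(self, lag_knots=10, dt=0.005, smooth_weight=1.0):
        self.lag = lag_knots
        self.dt = dt
        self.w = smooth_weight
        self.meas = []                      # (time, value, weight)

    def add_measurement(self, t, value, weight=1.0):
        self.meas.append((t, value, weight))
        horizon = t - self.lag * self.dt    # fixed lag: forget old data
        self.meas = [m for m in self.meas if m[0] >= horizon]

    def solve(self):
        t0 = min(t for t, _, _ in self.meas)
        t1 = max(t for t, _, _ in self.meas)
        n = max(2, int(np.ceil((t1 - t0) / self.dt)) + 1)
        knots = t0 + self.dt * np.arange(n)
        rows, rhs = [], []
        for t, z, w in self.meas:           # measurement factors
            k = min(int((t - t0) / self.dt), n - 2)
            a = (t - knots[k]) / self.dt    # interpolation weight in [0,1]
            row = np.zeros(n)
            row[k], row[k + 1] = (1.0 - a) * w, a * w
            rows.append(row)
            rhs.append(z * w)
        for k in range(n - 1):              # smoothness prior factors
            row = np.zeros(n)
            row[k], row[k + 1] = -self.w, self.w
            rows.append(row)
            rhs.append(0.0)
        x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return knots, x

# Demo: fuse asynchronous "radar" and "event" velocity readings of a
# platform moving at a constant 1.5 m/s.
sm = FixedLagVelocitySmoother()
for t in [0.000, 0.004, 0.009, 0.013, 0.021, 0.030]:   # radar ticks
    sm.add_measurement(t, 1.5, weight=1.0)
for t in [0.002, 0.007, 0.016, 0.025]:                 # event ticks
    sm.add_measurement(t, 1.5, weight=0.5)
knots, v_hat = sm.solve()
```

The per-sensor weights stand in for measurement covariances; the real system works on 6-DoF twist states and a proper continuous-time parameterisation, but the window-plus-prior structure is the same.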
Problem

Research questions and friction points this paper is trying to address.

Achieving reliable ego-motion estimation for agile robots
Robust velocity estimation in texture-less environments
Fusing event camera and radar for dynamic motion tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses event camera and radar for motion estimation
Uses raw events and Doppler for direct velocity
Continuous-time model for hybrid measurement fusion
Yang Lyu
School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710129 P.R. China
Zhenghao Zou
School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710129 P.R. China
Yanfeng Li
School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710129 P.R. China
Chunhui Zhao
Professor, IET Fellow, CAA Fellow, Zhejiang University
machine learning · time series analysis · LLM · industrial intelligence
Quan Pan
School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710129 P.R. China