TOFFE -- Temporally-binned Object Flow from Events for High-speed and Energy-Efficient Object Detection and Tracking

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency and power consumption of object detection and tracking on resource-constrained edge platforms (such as micro-drones) under high-speed motion, this paper proposes TOFFE, a lightweight hybrid framework that combines bio-inspired and conventional neural networks and operates on event-camera input. TOFFE introduces the "Object Flow" representation for object motion estimation, couples temporally-binned event encoding with jointly trained Spiking Neural Networks (SNNs) and Analog Neural Networks (ANNs), and contributes a synthetic event dataset tailored to high-speed motion scenarios. Deployed on a Loihi-2 + Jetson TX2 heterogeneous platform, TOFFE preserves accuracy while delivering significant efficiency gains: on Jetson TX2 alone, it reduces energy consumption by 5.7× and inference latency by 4.6× relative to prior event-based object detection baselines; on the full heterogeneous platform, the improvements reach 8.3× and 5.8×, respectively.

📝 Abstract
Object detection and tracking is an essential perception task for enabling fully autonomous navigation in robotic systems. Edge robot systems such as small drones need to execute complex maneuvers at high speeds with limited resources, which places strict constraints on the underlying algorithms and hardware. Traditionally, frame-based cameras are used for vision-based perception due to their rich spatial information and simplified synchronous sensing capabilities. However, obtaining detailed information across frames incurs high energy consumption and may not even be required. In addition, their low temporal resolution renders them ineffective in high-speed motion scenarios. Event-based cameras offer a biologically-inspired solution by capturing only changes in intensity at exceptionally high temporal resolution and low power consumption, making them ideal for high-speed motion scenarios. However, their asynchronous and sparse outputs are not natively compatible with conventional deep learning methods. In this work, we propose TOFFE, a lightweight hybrid framework for performing event-based object motion estimation (including pose, direction, and speed estimation), referred to as Object Flow. TOFFE integrates bio-inspired Spiking Neural Networks (SNNs) and conventional Analog Neural Networks (ANNs) to efficiently process events at high temporal resolutions while remaining simple to train. Additionally, we present a novel event-based synthetic dataset involving high-speed object motion to train TOFFE. Our experimental results show that TOFFE achieves 5.7x/8.3x reduction in energy consumption and 4.6x/5.8x reduction in latency on edge GPU (Jetson TX2)/hybrid hardware (Loihi-2 and Jetson TX2), compared to previous event-based object detection baselines.
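The temporal binning named in the title refers to accumulating an asynchronous event stream into a fixed number of time slices so that conventional network layers can consume it. The sketch below is an illustrative NumPy implementation of this general idea, assuming events given as (x, y, t, p) tuples with signed polarity; the exact encoding, bin count, and function names used by TOFFE are not specified on this page.

```python
import numpy as np

def bin_events(events, num_bins, height, width, t_start, t_end):
    """Accumulate an (N, 4) array of (x, y, t, p) events, with polarity
    p in {-1, +1}, into `num_bins` temporally-binned frames of shape
    (height, width). Illustrative only; not TOFFE's exact encoding."""
    frames = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Map each timestamp to a bin index in [0, num_bins - 1].
    b = ((t - t_start) / (t_end - t_start) * num_bins).astype(int)
    b = np.clip(b, 0, num_bins - 1)
    # Sum event polarities into the corresponding (bin, y, x) cells.
    np.add.at(frames, (b, y, x), p)
    return frames

# Three events in a 2x2 sensor over a 1-second window, split into 2 bins.
events = np.array([[0, 0, 0.0, 1.0],
                   [1, 1, 0.5, -1.0],
                   [0, 0, 0.9, 1.0]])
frames = bin_events(events, num_bins=2, height=2, width=2,
                    t_start=0.0, t_end=1.0)
```

The resulting tensor of shape (num_bins, height, width) preserves coarse event timing across bins while presenting a dense, frame-like input to downstream SNN/ANN layers.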
Problem

Research questions and friction points this paper is trying to address.

Energy Efficiency
High-speed Motion
Event-based Cameras
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking Neural Networks
Event-based Cameras
Energy-efficient Object Detection
Authors
Adarsh Kumar Kosta
PhD student, C-BRIC, Purdue University
Neuromorphic computing · Deep learning · Spiking Neural Networks · Event-based vision
Amogh Joshi
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
Arjun Roy
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
Rohan Kumar Manna
Graduate Research Assistant, Purdue University
Computer Vision · Robotics · Deep Learning · Autonomous Vehicles
Manish Nagaraj
Purdue University
Data Efficiency · Training Data Attribution · Deep Learning · Federated Learning · Computer Vision
Kaushik Roy
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA