🤖 AI Summary
To address the frequent identity switches and limited robustness that occlusions and viewpoint variations cause in 3D multi-object tracking, this paper proposes a two-stage BEV Transformer framework with camera-LiDAR fusion. We design a query-based cross-modal fusion architecture that requires no explicit motion model, jointly encoding geometry and semantics so that multi-view images and LiDAR point clouds align and complement each other in the bird's-eye view. A temporal sliding-window smoother and a bounding-box trajectory optimization module further strengthen identity consistency. On the nuScenes test set, the method achieves 74.7 aMOTA with significantly fewer ID switches, striking a favorable balance between high accuracy and stability and validating the multimodal feature-driven Transformer tracking paradigm.
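The summary describes identity assignment from joint geometric and semantic cues without an explicit motion model. As a rough illustration only (the paper's actual association is query-based inside the transformer), the sketch below combines a center-distance cost with an appearance-embedding cost and matches greedily; the function name, weights, and gating threshold are all hypothetical.

```python
import math

def match_ids(tracks, detections, w_geo=0.5, w_sem=0.5, gate=2.0):
    """Greedy ID assignment mixing geometric and semantic cues.

    Hypothetical simplification: each track/detection is
    (center_xy, embedding). Cost = weighted center distance plus
    (1 - cosine similarity) of appearance embeddings; pairs above
    `gate` are left unmatched (e.g. to spawn new tracks).
    """
    def cost(t, d):
        (tx, ty), te = t
        (dx, dy), de = d
        geo = math.hypot(tx - dx, ty - dy)
        dot = sum(a * b for a, b in zip(te, de))
        nt = math.sqrt(sum(a * a for a in te)) or 1.0
        nd = math.sqrt(sum(a * a for a in de)) or 1.0
        sem = 1.0 - dot / (nt * nd)
        return w_geo * geo + w_sem * sem

    # Sort all candidate pairs by cost, then assign greedily.
    pairs = sorted(
        ((cost(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        key=lambda x: x[0],
    )
    assigned, used_t, used_d = {}, set(), set()
    for c, ti, di in pairs:
        if c > gate or ti in used_t or di in used_d:
            continue
        assigned[di] = ti
        used_t.add(ti)
        used_d.add(di)
    return assigned  # detection index -> matched track index
```

In practice one would replace the greedy loop with Hungarian matching and learn the embeddings from the fused BEV features; this sketch only shows the shape of the cost.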
📝 Abstract
We propose FutrTrack, a modular camera-LiDAR multi-object tracking framework that builds on existing 3D detectors by introducing a transformer-based smoother and a fusion-driven tracker. Inspired by query-based tracking frameworks, FutrTrack employs a multimodal two-stage transformer refinement and tracking pipeline. Our fusion tracker integrates bounding boxes with multimodal bird's-eye-view (BEV) fusion features from multiple cameras and LiDAR without the need for an explicit motion model. The tracker assigns and propagates identities across frames, leveraging both geometric and semantic cues for robust re-identification under occlusion and viewpoint changes. Prior to tracking, a temporal smoother operating over a moving window refines sequences of bounding boxes, reducing jitter and improving spatial consistency. Evaluated on nuScenes and KITTI, FutrTrack demonstrates that query-based transformer tracking methods benefit significantly from multimodal sensor features compared with previous single-sensor approaches. With an aMOTA of 74.7 on the nuScenes test set, FutrTrack achieves strong performance on 3D MOT benchmarks, reducing identity switches while maintaining competitive accuracy. Our approach provides an efficient framework for improving transformer-based trackers to compete with other neural-network-based methods even with limited data and without pretraining.
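The abstract's temporal smoother refines bounding-box sequences over a moving window. As a minimal stand-in for the paper's transformer-based smoother, the sketch below applies a centered moving average to each box parameter; the function name and window size are assumptions, and naive averaging of a yaw angle ignores wraparound, which a real implementation would handle.

```python
def smooth_boxes(boxes, window=5):
    """Sliding-window moving-average smoother for a box trajectory.

    Hypothetical simplification: each box is a tuple of floats
    (e.g. x, y, z, w, l, h, yaw). The smoothed box at frame t
    averages all boxes inside the centered window, which is
    truncated at the sequence boundaries.

    Note: plain averaging is wrong for angles near the +/-pi
    wraparound; a real smoother would average sin/cos instead.
    """
    n = len(boxes)
    half = window // 2
    out = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        win = boxes[lo:hi]
        out.append(tuple(
            sum(b[k] for b in win) / len(win)
            for k in range(len(boxes[0]))
        ))
    return out
```

Replacing this average with a learned transformer over the same window (as FutrTrack does) keeps the interface identical: a sequence of boxes in, a refined sequence out.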