Multiple Object Tracking as ID Prediction

πŸ“… 2024-03-25
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 9
✨ Influential: 4
πŸ€– AI Summary
Multi-object tracking (MOT) has long suffered from poor generalizability and complex hyperparameter tuning due to hand-crafted matching rules and the two-stage paradigm. This work reformulates object association as an end-to-end contextual identity prediction task: leveraging historical trajectories and their associated IDs as contextual prompts, a lightweight Transformer models appearance-motion embeddings to directly classify identities for detections in the current frame. We propose MOTIPβ€”a fully end-to-end trainable framework that eliminates heuristic matching and manually designed affinity costs. Evaluated on challenging benchmarks with severe occlusion and low frame rates (e.g., DanceTrack and SportsMOT), MOTIP achieves state-of-the-art performance. On MOT17, it matches leading Transformer-based methods while significantly improving training efficiency and cross-domain generalization capability.

πŸ“ Abstract
In Multiple Object Tracking (MOT), tracking-by-detection methods have long stood the test of time, splitting the process into two parts by definition: object detection and association. They leverage robust single-frame detectors and treat object association as a post-processing step through hand-crafted heuristic algorithms and surrogate tasks. However, the nature of heuristic techniques prevents end-to-end exploitation of training data, leading to increasingly cumbersome and challenging manual modifications when facing complicated or novel scenarios. In this paper, we regard the object association task as an end-to-end in-context ID prediction problem and propose a streamlined baseline called MOTIP. Specifically, we form the target embeddings into historical trajectory information while considering the corresponding IDs as in-context prompts, then directly predict the ID labels for the objects in the current frame. Thanks to this end-to-end process, MOTIP can learn tracking capabilities straight from training data, freeing itself from burdensome hand-crafted algorithms. Without bells and whistles, our method achieves impressive state-of-the-art performance in complex scenarios like DanceTrack and SportsMOT, and it performs competitively with other Transformer-based methods on MOT17. We believe that MOTIP demonstrates remarkable potential and can serve as a starting point for future research. The code is available at https://github.com/MCG-NJU/MOTIP.
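The core idea, "association as ID classification", can be illustrated with a toy sketch: given embeddings of historical trajectory tokens (the in-context prompts) and their ID labels, each current-frame detection is scored against every known ID and either assigned the best-matching ID or declared a newborn object. This is a simplified, hypothetical illustration, not the paper's actual architecture: MOTIP uses a learnable ID embedding dictionary and a Transformer decoder, whereas the sketch below substitutes plain cosine similarity; the function name, threshold, and max-pooling over tokens are all assumptions for illustration.

```python
import numpy as np

def predict_ids(trajectory_embs, trajectory_ids, detection_embs, newborn_thresh=0.5):
    """Toy ID-prediction step (hypothetical, illustrative only).

    trajectory_embs: (T, D) embeddings of historical trajectory tokens
    trajectory_ids:  (T,)   integer ID label for each trajectory token
    detection_embs:  (N, D) embeddings of current-frame detections
    Returns a list of N predicted IDs; unmatched detections get fresh IDs.
    """
    # Cosine-similarity "logits" between detections and trajectory tokens.
    t = trajectory_embs / np.linalg.norm(trajectory_embs, axis=1, keepdims=True)
    d = detection_embs / np.linalg.norm(detection_embs, axis=1, keepdims=True)
    sim = d @ t.T  # (N, T)

    ids = np.unique(trajectory_ids)
    # Aggregate token-level similarity into one score per known ID
    # (max-pool over that ID's trajectory tokens).
    id_scores = np.stack(
        [sim[:, trajectory_ids == i].max(axis=1) for i in ids], axis=1
    )  # (N, num_ids)

    preds = []
    next_new_id = int(ids.max()) + 1
    for n in range(len(detection_embs)):
        best = int(id_scores[n].argmax())
        if id_scores[n, best] >= newborn_thresh:
            preds.append(int(ids[best]))      # classified into a known ID
        else:
            preds.append(next_new_id)         # low score for every ID: newborn
            next_new_id += 1
    return preds
```

The point of the sketch is the framing: association becomes a per-detection classification over the current ID vocabulary, so it can be trained end-to-end with a standard classification loss instead of being handled by hand-crafted matching heuristics. (The greedy per-detection argmax here can assign the same ID twice; a learned model is trained to avoid this.)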
Problem

Research questions and friction points this paper is trying to address.

Reformulate multi-object tracking as an ID prediction task
Replace heuristic matching with an end-to-end trainable method
Achieve state-of-the-art results without complex architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats MOT as in-context ID prediction
Uses end-to-end trainable association task
Leverages object-level features for tracking
πŸ‘₯ Authors
Ruopeng Gao
Nanjing University
Computer Vision Β· Deep Learning Β· Multiple Object Tracking Β· Multimodal and Generative
Yijun Zhang
China Mobile (Suzhou) Software Technology Co., Ltd.
Limin Wang
Nanjing University, Shanghai AI Lab