🤖 AI Summary
The standard Generalized Labeled Multi-Bernoulli (GLMB) filter assumes at most one detection per object per sensor scan. In networks of virtual sensors, this assumption induces strong inter-detection dependencies in the update step, limiting parallelism and incurring high computational overhead. This paper proposes a GLMB variant that admits multiple detections per object from the same sensor. Relaxing the detection-wise dependency constraint decouples the update process across detections, significantly enhancing parallel scalability, and the method further leverages GPU architecture for efficient hypothesis management and state estimation. Preliminary experiments show near-linear run-time scaling with increasing track count and maximum number of retained hypotheses, with substantially higher throughput than a standard GLMB implementation. The core contribution is the integration of multi-detection inputs and dependency-free update mechanisms into the GLMB framework, enabling high-concurrency, low-latency distributed random finite set tracking.
📝 Abstract
Much recent research on multi-target tracking has focused on multi-hypothesis approaches leveraging random finite sets. Of particular interest are labeled random finite set methods that maintain temporally coherent labels for each object. While these methods enjoy important theoretical properties as closed-form solutions to the multi-target Bayes filter, the maintenance of multiple hypotheses under the standard measurement model is highly computationally expensive, even when hypothesis pruning approximations are applied. In this work, we focus on the Generalized Labeled Multi-Bernoulli (GLMB) filter as an example of this class of methods. We investigate a variant of the filter that allows multiple detections per object from the same sensor, a critical capability when deploying tracking in the context of distributed networks of machine learning-based virtual sensors. We show that this breaks the inter-detection dependencies in the filter updates of the standard GLMB filter, allowing updates with significantly improved parallel scalability and enabling efficient deployment on GPU hardware. We report the results of a preliminary analysis of a GPU-accelerated implementation of our proposed GLMB tracker, with a focus on run-time scalability with respect to the number of objects and the maximum number of retained hypotheses.
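The key structural point is that, once multiple detections per object are allowed, each detection contributes an independent likelihood factor, so per-detection terms can be evaluated concurrently and combined afterwards. The minimal sketch below illustrates that idea only; it is not the paper's actual algorithm. The function names (`gaussian_likelihood`, `update_track_weights`), the scalar Gaussian measurement model, and the use of a thread pool as a stand-in for GPU parallelism are all illustrative assumptions.

```python
# Toy illustration (not the paper's algorithm): a multi-detection update
# in which each detection's likelihood term is independent of the others,
# so the per-detection work can be dispatched in parallel.
import math
from concurrent.futures import ThreadPoolExecutor


def gaussian_likelihood(z, x, sigma=1.0):
    """Scalar Gaussian measurement likelihood g(z | x) -- assumed model."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


def update_track_weights(weights, states, detections):
    """Update hypothesis weights with a set of detections.

    Because the multi-detection model has no inter-detection dependency,
    each detection's likelihood vector is an independent task; here a
    thread pool stands in for the GPU-parallel evaluation described in
    the paper.
    """
    with ThreadPoolExecutor() as pool:
        # One independent task per detection.
        per_detection = list(pool.map(
            lambda z: [gaussian_likelihood(z, x) for x in states],
            detections))
    # Combine the independent factors and renormalize the weights.
    new_weights = list(weights)
    for likes in per_detection:
        new_weights = [w * g for w, g in zip(new_weights, likes)]
    total = sum(new_weights)
    return [w / total for w in new_weights]
```

Under the standard at-most-one-detection model, by contrast, detections within a scan must be jointly assigned to tracks, which is what couples the update terms and limits this kind of parallel decomposition.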